A Panorama of Post-quantum Cryptography

  • Paulo S. L. M. Barreto
  • Felipe Piazza Biasi
  • Ricardo Dahab
  • Julio César López-Hernández
  • Eduardo M. de Morais
  • Ana D. Salina de Oliveira
  • Geovandro C. C. F. Pereira
  • Jefferson E. Ricardini

Abstract

In 1994, Peter Shor published a quantum algorithm capable of factoring large integers and computing discrete logarithms in Abelian groups in polynomial time. Since these computational problems provide the security basis of conventional asymmetric cryptosystems (e.g., RSA, ECC), information encrypted under such schemes today may well become insecure in a future scenario where quantum computers are a technological reality. Fortunately, certain classical cryptosystems based on entirely different intractability assumptions appear to resist Shor’s attack, as well as other known quantum attacks. The security of these schemes, which are dubbed post-quantum cryptosystems, stems from hard problems on lattices, error-correcting codes, multivariate quadratic systems, and hash functions. Here we introduce the essential notions related to each of these schemes and explore the state of the art on practical aspects of their adoption and deployment, like key sizes and cryptogram/signature bandwidth overhead.

1 Introduction

In the 1990s, Peter Shor introduced new concerns to cryptography. He discovered a quantum algorithm able to factor large integers and compute discrete logarithms in finite fields in polynomial time, more precisely \(O(\log^{3}N)\) [84]. These concerns stem from the fact that the security of the conventional techniques used in asymmetric cryptography is based precisely on these or related problems (e.g., RSA, ECC) [59, 81].

Another, even more immediate, threat is the recent discovery of classical algorithms for solving certain discrete logarithm instances used in asymmetric encryption [6].

Fortunately, there exist cryptographic schemes based on different computational problems that resist the known quantum attacks. They became known as post-quantum cryptosystems: this is the case of cryptosystems based on lattices [42], error-correcting codes [55, 64], multivariate quadratic systems (\(\mathcal{M}\mathcal{Q}\)) [28, 48], and hash functions [28, 29], not to mention symmetric constructions in general.

Consider also the new paradigm of the Internet of Things, in which any object is able to connect to the Internet. A side effect of this interconnectivity is the potential vulnerability of embedded systems. Attacks that were primarily aimed at PCs can now be launched against cars, cell phones, e-tickets, and RFIDs. In this scenario, the devices are typically characterized by a shortage of energy supply (via battery), limited processing power and storage, and often communication channels with low bandwidth (e.g., SMS).

Since embedded systems are typically deployed on a large scale, cost becomes the designers’ main concern. Therefore, security solutions for embedded systems must have low cost, which can be achieved with tools that minimize transmission overhead, processing, and memory occupation. In this sense, symmetric encryption techniques already meet the required metrics, and asymmetric encryption is the bottleneck in most cases.

Asymmetric cryptographic primitives for encryption and digital signatures are essential in a modern security framework. However, conventional techniques are not efficient enough in some respects, which makes them unsuitable for embedded platforms, especially highly resource-constrained ones. In this context, the absence of costly operations (operations with large integers, especially modular exponentiations) makes post-quantum techniques more attractive in such scenarios.

The objective of this chapter is to introduce the basics of the main lines of post-quantum cryptography research (hash-based signatures, \(\mathcal{M}\mathcal{Q}\) systems, error-correcting codes, and lattices), as well as to present the latest research focusing on improvements to the key sizes and signature/cryptogram overheads of these schemes.

2 Hash-Based Digital Signature Schemes

Hash-based digital signature schemes became popular after Ralph Merkle’s work [56] in 1979. The scheme proposed by Merkle (MSS) is inspired by Lamport and Diffie’s one-time signature scheme [49]. The security of these signature schemes depends on the collision resistance and inversion resistance of the hash function used. The MSS scheme is considered practical and, although there is no proof, it is believed to be resistant against quantum computers. The disadvantage of one-time signature schemes is that a key pair can only be used for one signature, although that signature can be verified an arbitrary number of times.

2.1 Hash Function

Cryptographic hash functions are used in security applications such as digital signatures, data identification, and key derivation, among others. Formally, a hash function \(h:\{ 0,1\}^{{\ast}}\rightarrow \{ 0,1\}^{n}\) takes as input an arbitrarily long string m and returns a string r of fixed size n, the hash value (i.e., r = h(m)).

Since the image \(\{0,1\}^{n}\) of h is finite while its domain \(\{0,1\}^{\ast}\) is infinite, more than one message is necessarily mapped to the same hash value (or digest). Some applications require that it be computationally infeasible for an attacker to find two distinct messages that generate the same digest; one such example is that of digital signatures, in which the hashes of messages are signed, not the messages themselves.

2.2 Properties

The primary properties that a hash function \(h:\{ 0,1\}^{{\ast}}\longrightarrow \{0,1\}^{n}\) should possess are preimage resistance, second preimage resistance, and collision resistance:
  • Preimage resistance: Let r be a known digest. Then, it should be infeasible to find a value m such that h(m) = r.

  • Second preimage resistance: Let m be a known message. Then, it should be infeasible to find m′ such that m′ ≠ m and h(m′) = h(m).

  • Collision resistance: It should be infeasible to find a pair \(m,m^{{\prime}}\in \{ 0,1\}^{{\ast}}\) such that m ≠ m′ and h(m) = h(m′).

Another desirable property for practical applications is that the hash function be efficient (in speed, memory, energy, etc.) when implemented on various computing platforms (hardware and/or software). It is easy to see that a function that is collision resistant is also second preimage resistant, but the converse is not necessarily true.
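To make these notions concrete, the short Python sketch below instantiates h with SHA-256 (an illustrative choice, so n = 256; the chapter itself does not prescribe a particular function) and restates the three resistance properties as comments:

```python
import hashlib

def h(m: bytes) -> bytes:
    """A concrete instance of h: {0,1}* -> {0,1}^n with n = 256 (SHA-256)."""
    return hashlib.sha256(m).digest()

msg = b"message to be signed"
digest = h(msg)                      # r = h(m), fixed 32-byte output
print(digest.hex())

# Preimage resistance: given only `digest`, recovering some preimage should
# be infeasible. Second preimage resistance: given `msg`, finding msg2 != msg
# with h(msg2) == digest should be infeasible. Collision resistance: finding
# ANY distinct pair with equal digests should be infeasible.
assert len(h(b"")) == len(h(b"a" * 10**6)) == 32   # fixed n regardless of |m|
```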

2.3 Construction of Hash Functions

The design of hash functions has been based on various techniques such as block ciphers [54, 78, 93], the iterative Merkle-Damgård method [56], the sponge construction [16], and arithmetic primitives [25].

Standards based on these functions have evolved, mainly due to successive attacks reported in the literature and at specialized events. Recently, the National Institute of Standards and Technology (NIST) selected the Keccak [15] algorithm as the winner of a 5-year competition to create the new Secure Hash Algorithm standard (SHA-3).

2.4 Signature Schemes

A signature scheme SIGN is a triple of algorithms (GEN, SIG, VER) which satisfies the following properties:
  • The key generation algorithm GEN receives as input a security parameter \(1^{n}\) and produces a key pair (X, Y ), where X is the private key and Y is the public key.

  • The signature generation algorithm SIG takes as input a message \(M \in \{ 0,1\}^{\ast}\) and a private key X and produces a signature Sig, denoted by \(\mathit{Sig} \leftarrow \mathit{SIG}_{X}(M)\).

  • The signature verification algorithm VER takes as input a message M, a signature Sig of M, and a public key Y and produces a bit b, where b = 1 means that the signature is valid and b = 0 indicates that the signature is not valid.

2.5 One-Time Signature Schemes

One-time signature schemes first appeared in the work of Lamport [49] and Rabin [27, Chapter “Digitalized signatures” by Michael O. Rabin]. Merkle [56] proposed a technique to transform a one-time signature scheme into a scheme with an arbitrary but fixed number of signatures. The following describes the Lamport [49] and Winternitz [57] schemes.

2.5.1 Lamport-Diffie One-Time Signature Scheme

The Lamport-Diffie one-time signature scheme (LD-OTS) was proposed in [49]. Let n be a positive integer, the security parameter of LD-OTS. LD-OTS uses a one-way function
$$\displaystyle{f:\{ 0,1\}^{n} \rightarrow \{ 0,1\}^{n}}$$
and a cryptographic hash function
$$\displaystyle{g:\{ 0,1\}^{{\ast}}\rightarrow \{ 0,1\}^{n}.}$$
LD-OTS Key Pair Generation The signature key X consists of 2n bit strings of length n chosen uniformly at random:
$$\displaystyle{X = (x_{0}[0],x_{0}[1],\ldots,x_{n-1}[0],x_{n-1}[1])\quad \in _{R}\quad \{0,1\}^{(n,2n)}.}$$
The verification key Y is
$$\displaystyle{Y = (y_{0}[0],y_{0}[1],\ldots,y_{n-1}[0],y_{n-1}[1])\quad \in \quad \{0,1\}^{(n,2n)},}$$
where \(y_{i}[j] = f(x_{i}[j])\), 0 ≤ i ≤ n − 1, j = 0, 1. LD-OTS Signature Generation The digest d of a message M is signed using the signature key X. Let \(d = g(M) = (d_{0},\ldots,d_{n-1})\). The signature of d is
$$\displaystyle{\mathit{Sig} = (\mathit{sig}_{0},\ldots,\mathit{sig}_{n-1}) = (x_{0}[d_{0}],\ldots,x_{n-1}[d_{n-1}])\quad \in \quad \{0,1\}^{(n,n)}.}$$
The signature Sig is a sequence of n bit strings, each of length n, whose elements are chosen from the key X according to the bits of the message digest. That is, for each bit di of d, the algorithm selects the corresponding string \(x_{i}[d_{i}]\) of length n from the private key X. To verify a signature, the verifier first computes the message digest \(d = g(M) = (d_{0},\ldots,d_{n-1})\) and selects yi[di] from Y, the verification (public) key. Then, it checks whether
$$\displaystyle{(f(\mathit{sig}_{0}),\ldots,f(\mathit{sig}_{n-1})) = (y_{0}[d_{0}],\ldots,y_{n-1}[d_{n-1}]).}$$
If equality holds, the signature is valid; otherwise it is rejected. Observe that signature verification requires n evaluations of f, while signing requires no evaluation of f.
Figure 1 illustrates the Lamport scheme. In this example, the one-way function used was \(f(x) = x + 1\) mod 16. For the public key generation, 2n evaluations of the one-way function are required, one for each element of X. For signature verification, n evaluations of the one-way function are required, one for each element of Sig.
Fig. 1

Lamport one-time signature scheme example
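The sketch below is a minimal Python implementation of LD-OTS following the description above, and it instantiates the (GEN, SIG, VER) triple of Sect. 2.4. Both f and g are instantiated with SHA-256 (an illustrative assumption, so n = 256); the function and variable names are ours:

```python
import hashlib
import secrets

n = 256  # security parameter

def f(x: bytes) -> bytes:            # one-way function f: {0,1}^n -> {0,1}^n
    return hashlib.sha256(x).digest()

def g(m: bytes) -> bytes:            # hash function g: {0,1}^* -> {0,1}^n
    return hashlib.sha256(m).digest()

def bits(d: bytes):
    """Digest as a list of n bits, most significant bit first."""
    return [(byte >> (7 - i)) & 1 for byte in d for i in range(8)]

def keygen():
    # X = (x_0[0], x_0[1], ..., x_{n-1}[0], x_{n-1}[1]), chosen at random
    X = [[secrets.token_bytes(n // 8), secrets.token_bytes(n // 8)]
         for _ in range(n)]
    Y = [[f(x0), f(x1)] for x0, x1 in X]     # y_i[j] = f(x_i[j])
    return X, Y

def sign(M: bytes, X):
    d = bits(g(M))
    return [X[i][d[i]] for i in range(n)]    # sig_i = x_i[d_i]

def verify(M: bytes, Sig, Y) -> bool:
    d = bits(g(M))
    return all(f(Sig[i]) == Y[i][d[i]] for i in range(n))

X, Y = keygen()
Sig = sign(b"hello", X)
assert verify(b"hello", Sig, Y)
assert not verify(b"tampered", Sig, Y)
```

Note how signing merely selects private strings (no evaluation of f), while verification applies f once per selected string, matching the operation counts stated above.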

2.5.2 Winternitz One-Time Signature Scheme

Winternitz proposed an improvement to Lamport’s one-time signature scheme, reducing the size of the public and private keys. This scheme (W-OTS) was first mentioned in [57]. W-OTS uses a one-way function
$$\displaystyle{f:\{ 0,1\}^{n} \rightarrow \{ 0,1\}^{n}}$$
and a cryptographic hash function
$$\displaystyle{g:\{ 0,1\}^{{\ast}}\rightarrow \{ 0,1\}^{n},}$$
where n is a positive integer. W-OTS uses a parameter w which is the number of bits to be signed simultaneously. Larger values for w result in smaller signature keys and longer times for signing and verification. A comparative analysis of the running times and key sizes in terms of parameter w is found in [29].
W-OTS Key Pair Generation Given parameter \(w \in \mathbb{N}\), the private key is
$$\displaystyle{X = (x_{0},\ldots,x_{t-1}) \in _{\mathcal{R}}\{0,1\}^{(n,t)},}$$
where the xi are chosen uniformly at random. The size t is computed as \(t = t_{1} + t_{2}\), where
$$\displaystyle{t_{1} =\Bigg \lceil \frac{n} {w}\Bigg\rceil,\qquad t_{2} =\Bigg \lceil \dfrac{\lfloor \log _{2}t_{1}\rfloor + 1 + w} {w} \Bigg\rceil.}$$
The verification key
$$\displaystyle{Y = (y_{0},\ldots,y_{t-1}) \in \{ 0,1\}^{(n,t)}}$$
is computed by applying the one-way function f to each element of the signature key \(2^{w} - 1\) times:
$$\displaystyle{y_{i} = f^{2^{w}-1 }(x_{i}),\qquad for\quad i = 0,\ldots,t - 1.}$$
In order to minimize storage requirements, the use of the pseudorandom number generator (PRNG) described in [65] has been suggested [20]. This PRNG, \(\mathit{SEED}_{\mathrm{in}} \rightarrow (\mathit{RAND},\mathit{SEED}_{\mathrm{out}})\), enables the recovery of all signature keys from the initial seed SEED0 alone.
Figure 2 shows an example of key pair generation in the Winternitz signature scheme, using a PRNG and a one-way function. The PRNG computes \((\mathit{SEED}_{x}) \rightarrow (x_{i},\mathit{SEED}_{x})\). This scheme produces smaller signature keys than Lamport’s, but it increases the number of one-way function evaluations from 1 to \(2^{w} - 1\) for each element of the signature key.
Fig. 2

Winternitz key pair generation

W-OTS Signature Generation To generate the signature, first compute the message digest \(d = g(M) = (d_{0},\ldots,d_{n-1})\). If necessary, add zeros to the left of d, so as to make the bit length of d divisible by w. Then, d is split into t1 binary blocks of size w, resulting in \(d = (m_{0}\vert \vert \ldots \vert \vert m_{t_{1}-1})\), where | | denotes concatenation. The mi blocks are represented as integers in \(\{0,1,\ldots,2^{w} - 1\}\). Now, a checksum c is computed as
$$\displaystyle{c =\sum _{ i=0}^{t_{1}-1}(2^{w} - m_{ i}).}$$
Since \(c \leq t_{1}2^{w}\), the length of the binary representation of c is at most \(\lfloor \log _{2}t_{1}2^{w}\rfloor + 1 = \lfloor \log _{2}t_{1}\rfloor + w + 1\). If necessary, add zeros to the left of c in order to make the bit length of c divisible by w. Then, the extended string c can be divided into t2 blocks \(c = (c_{0}\vert \vert \ldots \vert \vert c_{t_{2}-1})\) of length w. Let b = d | | c be the concatenation of the extended string d with the extended string c. Thus, \(b = (b_{0}\vert \vert b_{1}\vert \vert \ldots \vert \vert b_{t-1}) = (m_{0}\vert \vert \ldots \vert \vert m_{t_{1}-1}\vert \vert c_{0}\vert \vert \ldots \vert \vert c_{t_{2}-1})\). The signature is computed as
$$\displaystyle{\mathit{Sig} = (\mathit{sig}_{0},\ldots,\mathit{sig}_{t-1}) = (f^{b_{0} }(x_{0}),f^{b_{1} }(x_{1}),\ldots,f^{b_{t-1} }(x_{t-1})).}$$
W-OTS Verification To verify signature \(\mathit{Sig} = (\mathit{sig}_{0},\ldots,\mathit{sig}_{t-1})\) of message M, first calculate \((b_{0},\ldots,b_{t-1})\) in the same way it was computed during signature generation; then, compute
$$\displaystyle{\mathit{sig}_{i}^{{\prime}} = f^{2^{w}-1-b_{ i}}(\mathit{sig}_{i}),\qquad for\quad i = 0,\ldots,t - 1.}$$
Finally, check whether
$$\displaystyle{(\mathit{sig}_{0}^{{\prime}},\ldots,\mathit{sig}_{t-1}^{{\prime}}) = Y = (y_{0},\ldots,y_{t-1}).}$$
If the signature is valid, then \(\mathit{sig}_{i} = f^{b_{i}}(x_{i})\), and therefore,
$$\displaystyle{f^{2^{w}-1-b_{ i}}(\mathit{sig}_{i}) = f^{2^{w}-1 }(x_{i}) = y_{i}}$$
holds for \(i = 0,1,\ldots,t - 1\).
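The sketch below is a minimal W-OTS implementation in Python under the same illustrative assumptions as before (f and g instantiated with SHA-256, so n = 256, and w = 4, giving t1 = 64, t2 = 3, t = 67). The checksum is what makes the construction sound: increasing any block bi (advancing its hash chain, which anyone can do) forces some checksum block to decrease, which would require inverting f.

```python
import hashlib
import math
import secrets

n, w = 256, 4                        # hash length in bits, Winternitz parameter
t1 = math.ceil(n / w)
t2 = math.ceil((math.floor(math.log2(t1)) + 1 + w) / w)
t = t1 + t2

def f_iter(x: bytes, e: int) -> bytes:   # f^e(x), with f = SHA-256 (illustrative)
    for _ in range(e):
        x = hashlib.sha256(x).digest()
    return x

def g(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

def blocks(M: bytes):
    """Split d = g(M) into t1 w-bit blocks and append t2 checksum blocks."""
    d = int.from_bytes(g(M), "big")
    m = [(d >> (w * (t1 - 1 - i))) & (2**w - 1) for i in range(t1)]
    c = sum(2**w - mi for mi in m)                       # checksum
    cb = [(c >> (w * (t2 - 1 - i))) & (2**w - 1) for i in range(t2)]
    return m + cb                                        # b = (b_0, ..., b_{t-1})

def keygen():
    X = [secrets.token_bytes(n // 8) for _ in range(t)]
    Y = [f_iter(x, 2**w - 1) for x in X]                 # y_i = f^{2^w - 1}(x_i)
    return X, Y

def sign(M: bytes, X):
    return [f_iter(X[i], b) for i, b in enumerate(blocks(M))]

def verify(M: bytes, Sig, Y) -> bool:
    return all(f_iter(Sig[i], 2**w - 1 - b) == Y[i]
               for i, b in enumerate(blocks(M)))

X, Y = keygen()
assert verify(b"hello", sign(b"hello", X), Y)
```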

2.6 Merkle Digital Signature Scheme

In the Merkle digital signature scheme described below, the one-time signing and verification keys correspond to the leaves of the tree, and the public key is the root. A tree of height H has \(2^{H}\) leaves and therefore \(2^{H}\) one-time key pairs (public and private).

2.6.1 Merkle Key Generation

For the generation of the Merkle public key (pub), which corresponds to the root of the Merkle tree, one must first generate \(2^{H}\) one-time key pairs (public and private), one for each leaf of the Merkle tree.

One-Time Key Pair Generation A one-time signature algorithm generates private keys X[u] and public keys Y [u], for each leaf of the Merkle tree, \(u = 0,\ldots,2^{H} - 1\). Algorithm 2.1 describes the process of one-time key pair generation.

Algorithm 2.1 Winternitz one-time key pair generation (Leafcalc) [56]

Merkle Public Key Generation (Pub) Algorithm 2.2 generates the Merkle tree public key pub. The input values are the initial leaf leafIni and the tree height H. Each leaf node node[u] of the tree receives the corresponding verification key Y [u]. The inner nodes of the Merkle tree node[parent] contain the hash value of the concatenation of their left and right children, node[left] and node[right], respectively. Each time a leaf u is calculated and pushed onto stackNode, the algorithm checks whether the two nodes at the top of stackNode have the same height. If they do, the two nodes are popped and the hash value of their concatenation is pushed onto stackNode. The algorithm terminates when the root of the tree is found.

Algorithm 2.2 Merkle public key generation (CalcRoot) [56]

Figure 3 shows the order in which the nodes are stacked on the tree according to Algorithm 2.2. The nodes in gray represent the nodes that have already been generated. For example, the 4th node generated (leaf u = 2) received Y [2]. The 3rd node is the hash result of the concatenation of the nodes 1 and 2.
Fig. 3

Merkle public key generation (pub)
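The stack-based computation of Algorithm 2.2 can be sketched in a few lines of Python (SHA-256 standing in for the hash g is again our illustrative assumption; the leaf values here abstract the hashed verification keys Y [u]):

```python
import hashlib

def g(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def calc_root(leaves):
    """Push leaf nodes left to right; whenever the two nodes on top of the
    stack have equal height, pop both and push the hash of their
    concatenation. For 2^H leaves the stack collapses to a single entry:
    the root, i.e., the Merkle public key pub."""
    stack = []                                   # entries: (height, value)
    for leaf in leaves:
        node = (0, g(leaf))                      # leaf node from Y[u]
        while stack and stack[-1][0] == node[0]:
            _, left = stack.pop()
            node = (node[0] + 1, g(left + node[1]))
        stack.append(node)
    assert len(stack) == 1
    return stack[0][1]

H = 3
Y = [b"verification key %d" % u for u in range(2 ** H)]
pub = calc_root(Y)
```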

2.6.2 MSS Signature Generation

Scheme MSS allows the generation of \(2^{H}\) signatures for a tree of height H. Suppose we want to sign messages M[u], for \(u = 0,\ldots,2^{H} - 1\). Each message M[u] is signed with the one-time signature key X[u], resulting in a signature Sig[u].

An authentication path Aut is used to store the nodes in the path needed to authenticate leaf Y [u], eliminating the need for sending the whole tree to the receiver.

The Merkle signature SIG consists of the one-time signature Sig[u] for leaf u, the corresponding verification key Y [u], the index u (leaf index), and its authentication path \(\mathit{Aut} = (\mathit{Aut}[0],\ldots,\mathit{Aut}[H - 1])\). Therefore, the signature is
$$\displaystyle{\mathit{SIG} = (u,\mathit{Sig}[u],Y [u],(\mathit{Aut}[0],\ldots,\mathit{Aut}[H - 1])).}$$
The Classic Authentication Path Algorithm The classic authentication path algorithm (Path Regeneration Algorithm) [56] computes the authentication path Aut of each tree leaf, needed to authenticate that leaf against the public key pub of the Merkle tree. This algorithm uses two stack variables, Aut and Aux. Stack Aut contains the current authentication path, and stack Aux saves the next authentication nodes that will be needed. Aut is formed by the siblings, at each level, of the nodes on the path connecting the leaf to the root of the Merkle tree.
We now describe the computation of authentication paths. The first authentication path is generated during the execution of Algorithm 2.2. The next authentication path is generated if a new signature is required. In Fig. 4, the nodes in gray show the first authentication path Aut for leaf u = 0.
Fig. 4

Implementation of Algorithm 2.2 with the first authentication path

Output and Update Phases Algorithm 2.3 shows the steps for producing the authentication path for the next leaf u in the tree. The algorithm starts by signing leaf u = 0; then, the leaf index is incremented by one, and the next authentication path is computed efficiently, since only the nodes that change in the path are updated.

Algorithm 2.3 The Path Regeneration Algorithm [56]

Algorithm 2.3 updates authentication nodes by executing function
$$\displaystyle{\mathit{CalcRoot}(\mathit{node}_{\mathrm{Ini}},h,\mathit{SEED}_{\mathrm{in}}).}$$
Function CalcRoot executes Algorithm 2.2 for node nodeIni. After \(2^{h}\) rounds, the value of the selected node will have been computed.

2.6.3 MSS Signature Verification

The signature verification consists of two steps [12, Chapter “Hash-based Digital Signature Schemes” by J. Buchmann, E. Dahmen and M. Szydlo]: in the first, signature Sig is verified using the one-time verification key Y [i] and the underlying one-time algorithm; in the second step, the public key of the Merkle tree is validated: the receiver reconstructs the path \((p[0],\ldots,p[H])\) from leaf i to the root. Index i is used to decide the order in which the authentication path is reconstructed. Initially, for leaf i, p[0] = g(Y [i]). For each \(h = 1,2,\ldots,H\), p[h] is computed using the condition (whether \(\lfloor i/(2^{h-1})\rfloor \equiv 1\ \mathrm{mod}\ 2\)) and the recursive formula
$$\displaystyle{p[h] = \left \{\begin{array}{ll} g(\mathit{Aut}[h - 1]\|p[h - 1])&\mbox{ if}\ \lfloor i/(2^{h-1})\rfloor \equiv 1\ \mathrm{mod}\ 2; \\ g(p[h - 1]\|\mathit{Aut}[h - 1])&\mbox{ otherwise.} \end{array} \right.}$$
Finally, if value p[H] is equal to the public key pub, the signature is valid.
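The recursion above translates directly into code. The Python sketch below (again with SHA-256 standing in for g) reconstructs p[H] from a leaf and its authentication path and compares it with pub; the naive full-tree construction in the demo exists only to produce a test vector:

```python
import hashlib

def g(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(i: int, Y_i: bytes, Aut, pub: bytes, H: int) -> bool:
    """Reconstruct p[0], ..., p[H] from leaf i and compare p[H] with pub."""
    p = g(Y_i)                                   # p[0] = g(Y[i])
    for h in range(1, H + 1):
        if (i >> (h - 1)) & 1:                   # floor(i / 2^{h-1}) is odd
            p = g(Aut[h - 1] + p)                # g(Aut[h-1] || p[h-1])
        else:
            p = g(p + Aut[h - 1])                # g(p[h-1] || Aut[h-1])
    return p == pub

# Demo: build the full tree naively to obtain pub and an authentication path.
H = 3
leaves = [g(b"Y%d" % u) for u in range(2 ** H)]
levels = [leaves]
while len(levels[-1]) > 1:
    prev = levels[-1]
    levels.append([g(prev[2 * k] + prev[2 * k + 1])
                   for k in range(len(prev) // 2)])
pub = levels[-1][0]
i = 2
Aut = [levels[h][(i >> h) ^ 1] for h in range(H)]   # sibling at each level
assert verify_path(i, b"Y%d" % i, Aut, pub, H)
```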

2.7 CMSS: An Improved Merkle Signature Scheme

The CMSS scheme [20] is a variation of the MSS scheme which increases the number of signatures from \(2^{20}\) to \(2^{40}\). In addition, CMSS reduces key pair generation time, signature generation time, and private key size. In [20] it was demonstrated that CMSS is competitive in practice, by presenting a highly efficient implementation within the Java Cryptographic Service Provider FlexiProvider and showing that the implementation can be used to sign messages within Microsoft Outlook.

In the CMSS scheme, two MSS authentication trees are used, a subtree and a main tree, each one with \(2^{h}\) leaves, where \(h = H/2\). Thus, the number of signatures is increased in relation to MSS. Note that MSS becomes impractical for H > 25, since private keys are too large and key pair generation takes too much time. For example, to generate \(2^{20}\) signature keys, two trees with \(2^{10}\) leaves each are generated with CMSS, while with MSS, a single tree with \(2^{20}\) leaves is constructed. Therefore, key generation time is reduced.

In order to improve signature generation time, CMSS uses Szydlo’s algorithm [89], which is more efficient for constructing authentication paths. This algorithm was implemented in [24], in which the purpose is to balance the number of calculated leaves in each authentication path.

As for reducing the private key size, a pseudorandom number generator (PRNG) [65] is used, and only the seed of the PRNG is stored. With an n-bit hash function and t Winternitz key elements, the signature key would take (t ⋅ n) bits; instead, one needs only to store a seed of n bits.

The CMSS public key is the root of the main tree. The messages are signed using the leaves of the subtree. After the first \(2^{h}\) signatures have been generated, a new subtree is constructed and used to generate the next \(2^{h}\) signatures.

CMSS Key Generation For key pair generation, the MSS key pair generation is called twice. The subtree and its first authentication path are generated. Then, the main tree and its first authentication path are computed.

CMSS uses the Winternitz one-time signature scheme. Figure 5 illustrates the CMSS scheme.
Fig. 5

CMSS signature scheme

CMSS Signature Generation CMSS signature generation is carried out in several parts. First, the one-time signature of the message is computed using a leaf of the subtree. After that, the one-time signature of the root of the subtree is computed using a leaf of the main tree. This signature is recalculated for the next signature only if all the leaves of the current subtree have already been used. Then, the authentication paths of both trees (main tree and subtree) are appended to the signature and the next authentication paths are computed. Thus, the next subtree is partially constructed, and the CMSS private key is updated.

CMSS Verification To verify a CMSS signature, one must check both one-time signatures and the roots of both trees.

2.8 GMSS: Merkle Signatures with Virtually Unlimited Signature Capacity

The GMSS scheme was published in 2007 [23]. It is another variation of the Merkle signature scheme, which allows a virtually unlimited number of messages (\(2^{80}\)) to be signed with one key pair. The basic construction of GMSS consists of a tree with T layers (subtrees). Subtrees in different layers may have different heights. To reduce signing time, GMSS distributes the cost of one signature generation across previous signatures and key generation. Moreover, this scheme allows the choice of different Winternitz parameters w in different subtrees, in order to produce smaller signatures.

GMSS Key Generation For each subtree, the one-time key generation algorithm computes the signature keys and Algorithm 2.2 calculates the roots. The first authentication path of each subtree is stored during generation of the root. Then, the signatures Sigτ of the Merkle subtrees, for \(\tau = 2,\ldots,T\), are computed to be used in the first signature. Since those signature values change less frequently for the upper layers, the precomputation can be distributed over many stages, resulting in a significant improvement in signing speed. To ensure small private keys, only the seed of the PRNG needs to be stored.

GMSS Signature Generation The root of a subtree is signed with the one-time signature key corresponding to the parent tree. Rootτ denotes the root of the tree τ. Sigτ denotes the one-time signature of Rootτ, which is generated using the leaf l of parent τ. The message digest d is signed using the leaves on the deepest layer T.

The number of messages that can be signed with a GMSS key is \(S = 2^{h_{1}+\ldots +h_{T}}\), where \(h_{1},\ldots,h_{T}\) are the heights of the subtrees. The GMSS signature consists of:
  • the index leaf s;

  • the one-time signatures Sigd and \(\mathit{Sig}_{\tau _{i,j_{i}}}\), for \(i = 2,\ldots,T\), \(j = 0,\ldots,2^{h_{1}+\ldots +h_{i-1}} - 1\);

  • the authentication paths \(\mathit{Aut}[\tau _{i,j_{i}},l_{i}]\) of leaves li, for \(i = 1,\ldots,T\), \(j = 0,\ldots,2^{h_{1}+\ldots +h_{i-1}} - 1\).

During signature generation, the roots \(\mathit{Root}_{\tau _{i,1}}\) are also calculated, as are the authentication paths \(\mathit{Aut}[\tau _{i,1},0]\) of trees \(\tau _{i,1}\), for \(i = 2,\ldots,T\). The signature generation is split into two parts. The first, online part computes Sigd. The second, offline part precomputes the authentication paths and one-time signatures of the roots required for upcoming signatures.

GMSS Verification The GMSS signature verification is essentially the same as that of schemes MSS and CMSS: the verifier checks the one-time signatures Sigd and \(\mathit{Sig}_{\tau _{i,j_{i}}}\), for \(i = 2,\ldots,T\) and \(j = 0,\ldots,2^{h_{1}+\ldots +h_{i-1}} - 1\). Then, she verifies the roots Rootτ, for \(\tau = 2,\ldots,T\), and the public key using the corresponding authentication paths.

2.9 XMSS: eXtended Merkle Signature Scheme

The hash-based signature scheme XMSS [22] is a variation of MSS, and it was the first practical forward secure signature with minimal security requirements. This scheme uses a function family F and a hash function family G. XMSS is efficient, provided that G and F are efficient. The parameters of XMSS are \(n \in \mathbb{N}\), the security parameter; \(w \in \mathbb{N}(w > 1)\), the Winternitz parameter; \(m \in \mathbb{N}\), the message length; \(H \in \mathbb{N}\), the tree height; the one-time signature keys x ∈ { 0, 1}n, chosen randomly with uniform distribution; a function family
$$\displaystyle{F_{n} =\{ f_{K}:\{ 0,1\}^{n} \rightarrow \{ 0,1\}^{n}\vert K \in \{ 0,1\}^{n}\};}$$
and a hash function gK, chosen randomly with uniform distribution from the family
$$\displaystyle{G_{n} =\{ g_{K}:\{ 0,1\}^{2n} \rightarrow \{ 0,1\}^{n}\vert K \in \{ 0,1\}^{n}\}.}$$

The one-time signature key x is used to construct the one-time verification key y by applying the function family Fn. In [22], the function family used was \(f_{K}(x) = g(\mathit{Pad}(K)\vert \vert \mathit{Pad}(x))\), for a key K ∈ { 0, 1}n and x ∈ { 0, 1}n, with \(\mathit{Pad}(z) = (z\vert \vert 10^{b-\vert z\vert -1})\) for | z |  < b, where b is the block size of the hash function.

The XMSS scheme uses a slightly modified version of the W-OTS proposed in [21]. This modification makes collision resistance unnecessary: the iterated evaluation of a hash function is replaced by a random walk through the function family Fn, as follows: for K, x ∈ { 0, 1}n, \(e \in \mathbb{N}\), and fK ∈ Fn, the function \(f_{K}^{e}(x)\) is defined by \(f_{K}^{0}(x) = K\) and, for e > 0, \(f_{K}^{e}(x) = f_{K^{{\prime}}}(x)\), where \(K^{{\prime}} = f_{K}^{e-1}(x)\).

Modified WOTS Key Pair Generation First compute the Winternitz parameters
$$\displaystyle{l_{1} =\Bigg \lceil \frac{m} {\log _{2}(w)}\Bigg\rceil,\qquad l_{2} =\Bigg \lfloor \frac{\log _{2}(l_{1}(w - 1))} {\log _{2}(w)} \Bigg\rfloor + 1,\qquad l = l_{1} + l_{2}.}$$
The public verification key is
$$\displaystyle{Y = (y_{1},\ldots,y_{l}) = (f_{\mathrm{sk}_{1}}^{w-1}(x),\ldots,f_{\mathrm{ sk}_{l}}^{w-1}(x)),}$$
where ski is the private signature key chosen uniformly at random and \(f^{w-1}\) is as defined above.
Modified WOTS Signature Generation This scheme signs messages of binary length m. The message bits are processed in base-w representation: the message is \(M = (m_{1},\ldots,m_{l_{1}})\), with \(m_{i} \in \{ 0,\ldots,w - 1\}\). The checksum \(C =\sum _{ i=1}^{l_{1}}(w - 1 - m_{i})\) in base-w representation, of length l2, is appended to M, resulting in \(b = (b_{1},\ldots,b_{l})\). The signature is
$$\displaystyle{\mathit{Sig} = (\mathit{sig}_{1},\ldots,\mathit{sig}_{l}) = (f_{\mathrm{sk}_{1}}^{b_{1} }(x),\ldots,f_{\mathrm{sk}_{l}}^{b_{l} }(x)).}$$
Modified WOTS Verification To check the signature, the verifier constructs the values \(b = (b_{1},\ldots,b_{l})\) as in the signature generation and then checks the equality
$$\displaystyle{(f_{\mathrm{sig}_{1}}^{w-1-b_{1} }(x),\ldots,f_{\mathrm{sig}_{l}}^{w-1-b_{l} }(x)) = (y_{1},\ldots,y_{l}).}$$
XMSS Public Key Generation XMSS is a modification of the Merkle tree. A tree of height H has H + 1 levels. XMSS uses the hash function gK and bitmasks (bitmaskTree) \((b_{l,j}\vert \vert b_{r,j}) \in \{ 0,1\}^{2n}\), chosen uniformly at random, where bl, j is the left bitmask and br, j is the right bitmask. The nodes on level j, 0 ≤ j ≤ H, are written NODEi, j, with \(0 \leq i < 2^{H-j}\). The nodes are computed as
$$\displaystyle{\mathit{NODE}_{i,j} = g_{K}((\mathit{NODE}_{2i,j-1} \oplus b_{l,j})\vert \vert (\mathit{NODE}_{2i+1,j-1} \oplus b_{r,j})).}$$
The bitmasks are the main difference from the other Merkle tree constructions, since they allow the collision-resistant hash function family to be replaced by a second preimage-resistant one. Observe, in Fig. 6, how the tree nodes NODEi, j of the XMSS scheme are constructed at each level j in order to generate the public key of the tree.
Fig. 6

XMSS signature scheme [21]
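The node computation with bitmasks can be sketched as follows in Python; instantiating gK as SHA-256 over K || data is our illustrative stand-in, not the exact construction of [22]:

```python
import hashlib
import secrets

n = 32  # node size in bytes (i.e., n = 256 bits)

def g_K(K: bytes, data: bytes) -> bytes:
    # Keyed hash g_K; hashing K || data with SHA-256 is a stand-in here.
    return hashlib.sha256(K + data).digest()

def xmss_parent(K, left, right, bl, br):
    """NODE_{i,j} = g_K((NODE_{2i,j-1} XOR b_{l,j}) || (NODE_{2i+1,j-1} XOR b_{r,j}))."""
    xl = bytes(a ^ b for a, b in zip(left, bl))
    xr = bytes(a ^ b for a, b in zip(right, br))
    return g_K(K, xl + xr)

K = secrets.token_bytes(n)                                # hash key
bl, br = secrets.token_bytes(n), secrets.token_bytes(n)   # bitmasks of level j
left, right = secrets.token_bytes(n), secrets.token_bytes(n)
parent = xmss_parent(K, left, right, bl, br)
```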

To generate a leaf of the XMSS tree, an L-tree is used. The one-time public verification keys \((y_{1},\ldots,y_{l})\) are the first l leaves of an L-tree. If l is not a power of 2, there are not enough leaves to form a complete binary tree; a node that has no right sibling is then lifted to a higher level of the L-tree until it becomes the right sibling of another node. The hash function uses new bitmasks (bitmaskLtree), which are the same for all L-trees. The XMSS public key contains the bitmaskTree, the bitmaskLtree, and the root of the XMSS tree.

2.10 Security of the Hash-Based Digital Signature Schemes

In this section we present the main results known about the security of hash-based digital signature schemes.

In [12, Chapter “Hash-based Digital Signature Schemes” by J. Buchmann, E. Dahmen and M. Szydlo], it was proved that the Lamport-Diffie one-time signature scheme is existentially unforgeable under adaptive chosen message attacks (CMA-secure), assuming that the underlying one-way function is preimage resistant. In the same work, it was also proved that the Merkle signature scheme is existentially unforgeable under the assumptions that the hash function is collision resistant and the underlying one-time signature scheme is existentially unforgeable.

On the security of XMSS, the following result was proved in [21]: if Hn is a second preimage-resistant hash function family and Fn a pseudorandom function family, then XMSS is existentially unforgeable under chosen message attacks. In addition, the same paper showed that XMSS is forward secure under some modifications of the key generation process.

Hülsing [44] showed that W-OTS is existentially unforgeable under adaptive chosen message attacks. The same work also showed that the \(\mathrm{XMSS}^{\mathit{MT}}\) scheme is secure; more specifically, the following result is proved: if Hn is a second preimage-resistant hash function family and Fn is a pseudorandom function family, then \(\mathrm{XMSS}^{\mathit{MT}}\) is a forward secure signature scheme.

2.11 Implementation Results

In this section we present a summary of recent works on the implementation of variants of the Merkle signature scheme.

We use the following notation: time to generate keys (tkey), time to generate a signature (tsig), and time to verify a signature (tver). Table 1 shows timings obtained in the following works:
  • CMSS scheme [20]: software implementation on a Pentium M 1.73 GHz with 1 GB of RAM running Microsoft Windows XP, for \(2^{40}\) signatures and w = 3;

  • GMSS scheme [23]: software implementation on a dual-core Pentium 1.8 GHz, for \(2^{40}\) signatures with w1 = 9 and w2 = 3 (390 min for key generation, 10.7 ms for signing, and 10.7 ms for verification);

  • XMSS scheme [22]: software implementation on an Intel(R) Core(TM) i5 M540 at 2.53 GHz with Infineon technology;

  • CMSS scheme [85]: hardware implementation of a novel architecture on an FPGA platform (Virtex-5);

  • XMSS scheme [66]: software implementation on an Intel Core i7-2670QM CPU at 2.20 GHz with 6 GB of RAM.

Table 1

Implementation results

Scheme       Hash    H    w      tkey        tsig (ms)   tver (ms)
CMSS [20]    SHA-2   40   (3,3)  120.7 min   40.9        3.7
GMSS [23]    SHA-1   40   (9,3)  390 min     10.7        10.7
XMSS [22]    SHA-2   20   4      408.6 s     6.3         0.51
CMSS [85]    SHA-2   30   4      820 ms      2.7         1.7
XMSS [66]    SHA-2   20   4      553 s       2.7         0.31

In Table 1 the size of all public keys is 32 bytes, except for the XMSS scheme, which also has to store the bitmasks. The private key and the signature are smaller in the XMSS scheme, since the other schemes need to store information about more than one tree. The XMSS scheme presented the best signing and verification timings among the software implementations, given that only one authentication path needs to be updated and checked for each signature. However, XMSS is only recommended for applications requiring up to \(2^{20}\) signature keys, since the generation of more keys is too time consuming. The Multi-Tree XMSS (\(\mathrm{XMSS}^{\mathit{MT}}\)) [79], based on algorithms from CMSS and GMSS, is recommended for applications that require a larger number of signatures.

3 Multivariate Schemes

Multivariate public key cryptosystems (MPKC) constitute one of the main public key families considered potentially resistant against quantum computers. The security of MPKC schemes is based upon the difficulty of solving nonlinear systems of equations over finite fields. In most cases, such schemes are based upon multivariate systems of quadratic equations because of computational advantages. This problem is known as the multivariate quadratic problem, or \(\mathcal{M}\mathcal{Q}\) Problem, and it was shown to be NP-complete [69]. MPKC has been developed intensively in the last two decades. It was shown that, in general, encryption schemes were not as secure as believed, while signature constructions can be considered viable.

The idea behind MPKC is to define a trapdoor one-way function whose image is a nonlinear system of multivariate equations over a finite field. The public key is given by a set of polynomials:
$$\displaystyle{\mathcal{P} =\{ p_{1}(x_{1},\ \ldots,\ x_{n}),\cdots \,,p_{m}(x_{1},\ \ldots,\ x_{n})\}}$$
where each pk is a nonlinear polynomial (usually quadratic) in the variables x = (x1,  ⋯ ,  xn):
$$\displaystyle{ p_{k}(x_{1},\ \ldots,\ x_{n}):=\sum _{1\leq i\leq j\leq n}P_{\mathit{ij}}^{(k)}x_{ i}x_{j} +\sum _{1\leq i\leq n}L_{i}^{(k)}x_{ i} + c^{(k)},1 \leq k \leq m }$$
(1)
and all the coefficients and variables are in \(\mathbb{F}_{q}\). In order to make the previous definition simpler, we will adopt vector notation, which is closer to practical implementations:
$$\displaystyle{ p_{k}(\mathbf{x}):= \mathbf{x}P^{(k)}\mathbf{x}^{\mathop{\mathrm{T}}\nolimits } + L^{(k)}\mathbf{x}^{\mathop{\mathrm{T}}\nolimits } + c^{(k)},1 \leq k \leq m }$$
(2)
where \(P^{(k)} \in \mathbb{F}_{q}^{n\times n}\) is an n × n matrix whose entries are the quadratic-term coefficients of \(p_{k}(x_{1},\ \ldots,\ x_{n})\), \(L^{(k)} \in \mathbb{F}_{q}^{n}\) is a vector whose entries are the linear-term coefficients of \(p_{k}(x_{1},\ \ldots,\ x_{n})\), and c(k) denotes the constant term of \(p_{k}(x_{1},\ \ldots,\ x_{n})\). Finally, x is the row vector of variables \([x_{1},\ \ldots,\ x_{n}]\). Figure 7 illustrates the pure quadratic transformation (or map) \(\mathbf{x}P^{(k)}\mathbf{x}^{\mathop{\mathrm{T}}\nolimits }\) (whose evaluation yields an element of the finite field, denoted \(h_{k} \in \mathbb{F}_{q}\)).
Fig. 7

Pure quadratic map or transform
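As a concrete illustration of Eq. 2, the Python sketch below evaluates one public polynomial pk over a toy prime field; the field \(\mathbb{F}_{31}\) and the specific coefficients are arbitrary choices for illustration only:

```python
q = 31  # toy prime field F_31 (illustrative choice)

def eval_pk(P, L, c, x):
    """p_k(x) = x P x^T + L x^T + c over F_q (Eq. 2), where P is the n x n
    matrix of quadratic coefficients, L the vector of linear coefficients,
    and c the constant term."""
    n = len(x)
    quad = sum(x[i] * P[i][j] * x[j] for i in range(n) for j in range(n))
    lin = sum(L[i] * x[i] for i in range(n))
    return (quad + lin + c) % q

# A tiny instance with n = 3 variables.
P = [[1, 2, 0],
     [0, 5, 7],
     [3, 0, 4]]
L = [6, 0, 9]
c = 11
x = [2, 3, 5]
h_k = eval_pk(P, L, c, x)   # the field element h_k produced by p_k
print(h_k)
```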

A formal definition for the \(\mathcal{M}\mathcal{Q}\) Problem is given as follows.

Definition 1 (\(\mathcal{M}\mathcal{Q}\) Problem)

Solve the random system \(p_{1}(\mathbf{x}) = p_{2}(\mathbf{x}) = \cdots = p_{m}(\mathbf{x}) = 0\), where each pi is quadratic in variables \(\mathbf{x} = (x_{1},\ \ldots,\ x_{n})\). All coefficients and variables are in \(K = \mathbb{F}_{q}\), the field with q elements.

In other words, the target of the \(\mathcal{M}\mathcal{Q}\) Problem is to find a solution x for a given map \(\mathcal{P}\). In 1979, Garey and Johnson proved [33, page 251] that the decision variant of the \(\mathcal{M}\mathcal{Q}\) Problem over binary finite fields is NP-complete.

On the other hand, the \(\mathcal{M}\mathcal{Q}\) signature schemes proposed in the literature do not base their security solely on the original \(\mathcal{M}\mathcal{Q}\) Problem. In order to invert the trapdoor one-way function, which means finding the original private system (or an equivalent one), it is necessary to solve a related problem called the Isomorphism of Polynomials Problem, or IP Problem, proposed by Patarin [70].

Definition 2 (Isomorphism of Polynomials Problem)

Let \(m,n \in \mathbb{N}\) be arbitrarily fixed. Further, denote by \(\mathcal{P},\mathcal{Q}: \mathbb{F}_{q}^{n} \rightarrow \mathbb{F}_{q}^{m}\) two multivariate quadratic maps and by \(\mathbb{T} \in \mathbb{F}_{q}^{m\times m}\), \(\mathbb{S} \in \mathbb{F}_{q}^{n\times n}\) two bijective linear maps such that \(\mathcal{P} = \mathbb{T} \circ \mathcal{Q}\circ \mathbb{S}\). Given \(\mathcal{P}\) and \(\mathcal{Q}\), find \(\mathbb{T}\) and \(\mathbb{S}\).

In other words, the goal of the IP Problem is to find \(\mathbb{T}\) and \(\mathbb{S}\) for a given pair \((\mathcal{P},\mathcal{Q})\). Note that, originally, \(\mathbb{S}\) was defined as an affine instead of a linear transformation [71]. However, Braeken et al. [18, Sec. 3.1] noticed that the constant part is not important for the security of certain \(\mathcal{M}\mathcal{Q}\) schemes and thus can be omitted.

3.1 Construction of \(\mathcal{M}\mathcal{Q}\) Keys

Generically, a typical \(\mathcal{M}\mathcal{Q}\) private key consists of two linear transformations \(\mathbb{T}\) and \(\mathbb{S}\) along with a quadratic transformation \(\mathcal{Q}\). Note that \(\mathcal{Q}\) has a particular trapdoor structure; we will present two distinct trapdoor structures in Sect. 3.2, for the UOV and Rainbow signature schemes. The trapdoor structure allows the signer to easily solve the public \(\mathcal{M}\mathcal{Q}\) system in order to generate valid signatures. The public key is simply given by the composition \(\mathcal{P} = \mathbb{T} \circ \mathcal{Q}\circ \mathbb{S}\). For some signature schemes it is not necessary to explicitly use the map \(\mathbb{T}\), since it can be reduced to the identity [12, Chapter 6].

The main difference among distinct \(\mathcal{M}\mathcal{Q}\) signature schemes lies in the trapdoor structure of \(\mathcal{Q}\). Since public keys have the same structure in most schemes, verifying a signature follows the same procedure, namely checking whether a given signature x is a solution of the public quadratic system \(p_{k}(\mathbf{x}) = h_{k},1 \leq k \leq m\). For other trapdoor constructions, the reader may see, for example, [94].

It is worth mentioning an obvious optimization of the public matrices P(k) defined over odd-characteristic fields, which provides a reduction by a factor of about two in the space needed for their representation. From the definition of the quadratic part of \(p_{k}(x_{1},\ \ldots,\ x_{n})\) (Eq. 1), the coefficient of the term xixj is \(P_{\mathit{ij}}^{(k)} + P_{\mathit{ji}}^{(k)}\); thus, one can update the coefficient Pij(k) of P(k) with the value \(P_{\mathit{ij}}^{(k)} + P_{\mathit{ji}}^{(k)}\) and the coefficient Pji(k) with zero, for i < j ≤ n, which makes the matrix P(k) upper triangular. After applying this representation, one is able to define a single public matrix called the public matrix of coefficients, denoted MP. Each row of MP is given by the linearization of the coefficients of each upper triangular matrix P(k). Figure 8 illustrates this construction.
Fig. 8

Public matrix of coefficients
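The folding of the lower triangle into the upper one, and the linearization of the result into a row of MP, can be sketched as follows (same toy field as before):

```python
q = 31  # toy odd-characteristic field F_31

def make_upper_triangular(P):
    """Fold P_ji into P_ij for i < j: the term x_i x_j has coefficient
    P_ij + P_ji, so over odd-characteristic fields the lower triangle can
    be merged into the upper one without changing the quadratic map."""
    n = len(P)
    for i in range(n):
        for j in range(i + 1, n):
            P[i][j] = (P[i][j] + P[j][i]) % q
            P[j][i] = 0
    return P

def linearize(P):
    """One row of the public matrix of coefficients MP: the upper triangle
    of P^(k), read row by row."""
    n = len(P)
    return [P[i][j] for i in range(n) for j in range(i, n)]

P = make_upper_triangular([[1, 2, 0],
                           [4, 5, 7],
                           [3, 6, 4]])
MP_row = linearize(P)       # n(n+1)/2 coefficients instead of n^2
print(MP_row)
```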

3.2 UOV and Rainbow \(\mathcal{M}\mathcal{Q}\) Signatures

One of the main \(\mathcal{M}\mathcal{Q}\) signature families still considered secure is the Unbalanced Oil and Vinegar (UOV) construction, proposed by Kipnis, Patarin, and Goubin [48]. The name Oil and Vinegar comes from the fact that the variables (x1, ⋯ , xn) of a certain quadratic private system are separated into two subsets, \(O = (x_{1},\cdots \,,x_{m})\) and \(V = (x_{m+1},\cdots \,,x_{n})\), in such a way that variables of the first set are never mixed together in a term of the quadratic system.

Formally, the trapdoor consists of a purely quadratic map, called the central map, \(\mathcal{Q}: \mathbb{F}^{n} \rightarrow \mathbb{F}^{m}\) with
$$\displaystyle{\mathcal{Q} =\{ f_{1}(u_{1},\ \ldots,\ u_{n}),\ldots,f_{m}(u_{1},\ \ldots,\ u_{n})\}}$$
and
$$\displaystyle{ f_{k}(u_{1},\ldots,u_{n}):=\sum _{1\leq i\leq j\leq n}Q_{\mathit{ij}}^{(k)}u_{ i}u_{j} \equiv uQ^{(k)}u^{\mathop{\mathrm{T}}\nolimits } }$$
(3)
The central map has an additional restriction on its polynomials \(f_{k}(u_{1},\ldots,u_{n})\): a certain part of their coefficients must be zero. The set of variables u is divided into two subsets: that of the vinegar variables ui, with \(i \in V =\{ 1,\cdots \,,v\}\), and that of the oil variables ui, with \(i \in O =\{ v + 1,\cdots \,,n\}\), of \(m = n - v\) elements. The restriction on the polynomials fk is that they have no term combining two oil variables; this ensures that there are no quadratic (or crossed) terms in the oils. Thus, there are only terms combining vinegar × vinegar and vinegar × oil variables. Patarin showed that, given this construction, one can fix arbitrary values for the vinegars and then obtain a linear system in the oils. This remaining linear system has a solution with high probability, i.e., \(1 - 1/q\), and can be solved using Gaussian elimination with complexity \(\mathcal{O}\left (n^{3}\right )\). The structure of the private polynomials is the following:
$$\displaystyle{ f_{k}(u_{1},\cdots \,,u_{n}):=\sum _{ i,j\in V,\;i\leq j}^{}Q_{\mathit{ij}}^{(k)}u_{ i}u_{j} +\sum _{ i\in V,\;j\in O}^{}Q_{\mathit{ij}}^{(k)}u_{ i}u_{j} }$$
(4)

In order to generate a signature \(\mathbf{x} \in \mathbb{F}_{q}^{n}\) of a given message, or rather of its hash \(h \in \mathbb{F}_{q}^{m}\), the signer has to invert the map \(P(\mathbf{x}) = Q(S(\mathbf{x})) = h\). Defining \(\mathbf{x^{{\prime}}} = \mathbf{x} \cdot S\), one first solves the multivariate system \(\mathbf{x^{{\prime}}}Q^{(k)}\mathbf{x^{{\prime}}}^{\mathop{\mathrm{T}}\nolimits } = h_{k}\), 1 ≤ k ≤ m, finding x′. Finally, the signature \(\mathbf{x} = \mathbf{x^{{\prime}}}S^{-1}\) is computed.

As explained before, the structure of the matrices Q(k) allows one to efficiently solve the \(\mathcal{M}\mathcal{Q}\) system, by choosing the v vinegar variables at random and then solving the resulting linear system for the remaining m oil variables. If the linear system has no solution, the process is repeated with new vinegar variables until a valid solution is found.

A signature x for h is valid if and only if all polynomials pk constituting the public key are satisfied, i.e., \(p_{k}(x_{1},\cdots \,,x_{n}) = \mathbf{x}P^{(k)}\mathbf{x}^{T} = h_{k}\), 1 ≤ k ≤ m. The consistency of the verification \(P(\mathbf{x})\mathop{ =}\limits^{?}h\) is shown next:
$$\displaystyle\begin{array}{rcl} p(\mathbf{x})& =& \mathbf{x}P\mathbf{x}^{\mathop{\mathrm{T}}\nolimits } {}\\ & =& \mathbf{x}(Q \circ S)\mathbf{x}^{\mathop{\mathrm{T}}\nolimits } {}\\ & =& \mathbf{x}(\mathit{SQS}^{\mathop{\mathrm{T}}\nolimits })\mathbf{x}^{\mathop{\mathrm{T}}\nolimits } {}\\ & =& (\mathbf{x^{{\prime}}}S^{-1})(\mathit{SQS}^{\mathop{\mathrm{T}}\nolimits })(\mathbf{x^{{\prime}}}S^{-1})^{\mathop{\mathrm{T}}\nolimits } {}\\ & =& \mathbf{x^{{\prime}}}(S^{-1}S)Q(S^{\mathop{\mathrm{T}}\nolimits }(S^{-1})^{\mathop{\mathrm{T}}\nolimits })\mathbf{x^{{\prime}}}^{\mathop{\mathrm{T}}\nolimits } {}\\ & =& \mathbf{x^{{\prime}}}IQI\mathbf{x^{{\prime}}}^{\mathop{\mathrm{T}}\nolimits } {}\\ & =& \mathbf{x^{{\prime}}}Q\mathbf{x^{{\prime}}}^{\mathop{\mathrm{T}}\nolimits } {}\\ & =& h. {}\\ \end{array}$$
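The whole UOV cycle (key generation with \(P^{(k)} = SQ^{(k)}S^{\mathop{\mathrm{T}}\nolimits }\), signing by fixing vinegars and solving the resulting linear system in the oils, and verification) is illustrated by the toy Python sketch below. The parameters (q = 31, v = 6, m = 3) are far too small to be secure and were chosen only so the example runs instantly; following the discussion above, the map \(\mathbb{T}\) is taken as the identity:

```python
import secrets

q, v, m = 31, 6, 3          # toy parameters: F_31, 6 vinegars, 3 oils
n = v + m

def rnd(): return secrets.randbelow(q)

def quad_eval(M, x):        # x M x^T mod q
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n)) % q

def mat_mul(A, B):
    r, c, k = len(A), len(B[0]), len(B)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % q for j in range(c)]
            for i in range(r)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inverse(A):
    """Gauss-Jordan inverse mod the prime q; returns None if singular."""
    N = len(A)
    M = [row[:] + [int(i == j) for j in range(N)] for i, row in enumerate(A)]
    for col in range(N):
        piv = next((r for r in range(col, N) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], q - 2, q)
        M[col] = [a * inv % q for a in M[col]]
        for r in range(N):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % q for a, b in zip(M[r], M[col])]
    return [row[N:] for row in M]

def keygen():
    # Central map: Q^(k) with no oil x oil terms (both indices >= v)
    Q = [[[rnd() if (i < v or j < v) else 0 for j in range(n)]
          for i in range(n)] for _ in range(m)]
    while True:
        S = [[rnd() for _ in range(n)] for _ in range(n)]
        Si = inverse(S)
        if Si is not None:
            break
    P = [mat_mul(mat_mul(S, Qk), transpose(S)) for Qk in Q]  # P = S Q S^T
    return (Q, Si), P

def sign(h, Q, Si):
    while True:
        vin = [rnd() for _ in range(v)]          # random vinegar values
        # Linear system A y = b in the m oil variables y
        A = [[sum(vin[i] * (Q[k][i][v + o] + Q[k][v + o][i])
                  for i in range(v)) % q for o in range(m)] for k in range(m)]
        b = [(h[k] - sum(vin[i] * Q[k][i][j] * vin[j]
                         for i in range(v) for j in range(v))) % q
             for k in range(m)]
        Ai = inverse(A)
        if Ai is None:
            continue                             # retry with new vinegars
        y = [sum(Ai[o][k] * b[k] for k in range(m)) % q for o in range(m)]
        xp = vin + y                             # x' solves x' Q^(k) x'^T = h_k
        return [sum(xp[i] * Si[i][j] for i in range(n)) % q
                for j in range(n)]               # x = x' S^{-1}

h = [rnd() for _ in range(m)]                    # "message hash" in F_q^m
(Q, Si), P = keygen()
x = sign(h, Q, Si)
assert all(quad_eval(P[k], x) == h[k] for k in range(m))   # verification
```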
Historically, UOV signatures came from the Oil and Vinegar (OV) construction [68], where the numbers of vinegars and oils are the same (balanced oil and vinegar), but that construction was shown to be insecure [47]. It was later redesigned to be secure by unbalancing the sizes of the two subsets (v > m), which originated the Unbalanced Oil and Vinegar (UOV) signature [48]. Figure 9 illustrates the structure of each UOV private polynomial.
Fig. 9

UOV central map

In order to hide the trapdoor structure of the polynomials fk, an invertible linear transformation \(\mathbb{S} \in \mathbb{F}_{q}^{n\times n}\) is applied to the right of \(\mathcal{Q}\), so the resulting public map is \(\mathcal{P} = \mathcal{Q}\circ \mathbb{S}\). The private key is given by the pair \(\mathit{sk}:= (\mathcal{Q}, \mathbb{S})\), and the public key is composed of the polynomials \(\mathcal{P}:= p(x_{1},\cdots \,,x_{n}) =\{ p_{1}(x_{1},\cdots \,,x_{n}),\cdots \,,p_{m}(x_{1},\cdots \,,x_{n})\}\). Thus, it becomes clear that the security of the system is not directly based on the \(\mathcal{M}\mathcal{Q}\) Problem: recovering the private key amounts to decomposing \(\mathcal{P}\) into \(\mathcal{Q}\) and \(\mathbb{S}\), in other words, to solving the IP Problem.

An important variant of the UOV scheme is the Rainbow signature [28], proposed by Ding and Schmidt, whose main advantage is the shorter signatures attained compared to UOV [91, Section 3].

The basic idea of the Rainbow signature is to separate the m private UOV equations into smaller bands and to partition the variables accordingly; in other words, each band has its own oils and vinegars. After a band is processed, all of its variables become the vinegars for the next band, and so on, until the last band is processed.

Typically the central map is divided into only two bands, since this configuration has been shown to be the most suitable, in the sense that it avoids certain structural attacks and keeps the signatures reasonably short [91].

A Rainbow central map \(\mathcal{Q}\) with two bands, for example, is divided into two layers, as shown in Fig. 10, where v1 and o1 are the numbers of vinegars and oils of the first layer and v2 and o2 are the numbers of vinegars and oils of the second layer. Note that \(v_{2} = o_{1} + v_{1}\).
Fig. 10

Rainbow central map \(\mathcal{Q}\) with two bands

The signing procedure is similar to the UOV one: vinegars are chosen at random for the first band in order to compute its oils, as in UOV. Then, these computed variables (vinegars plus oils) are used as the vinegars for the next band.

3.3 The Cyclic UOV Signature

An interesting step towards the reduction of UOV/Rainbow key sizes was made by means of the Cyclic UOV/Rainbow constructions [74, 77]. Petzoldt et al. noticed the existence of a linear relation between part of the public quadratic map and the private quadratic map. That relation was exploited in order to construct key pairs in an unusual way, making it possible to reduce the size of the public key. The idea is to first generate the quadratic part of the public key with a desired compact structure and then compute the quadratic part of the private key using the linear relation.

Therefore, it is possible to obtain shorter public keys while the private ones remain random looking and without any apparent structure. The structure suggested by Petzoldt et al. was one that uses circulant matrices, which is the origin of the name Cyclic UOV [77]. Circulant matrices are very compact, since they can be represented simply by their first row. Thus, the public key can be stored in an efficient manner, apart from some processing advantages such as Karatsuba and fast Fourier transform (FFT) techniques.

Cyclic UOV keys are constructed as follows. First, one generates an invertible linear transformation \(\mathbb{S} \in \mathbb{F}_{q}^{n\times n}\), where \(S_{\mathit{ij}}\stackrel{\;_{\$}}{\leftarrow }\mathbb{F}_{q},1 \leq i,j \leq n\), and, from \(\mathbb{S}\), one computes the aforementioned linear relation, denoted \(A_{\mathrm{UOV}}:= (\alpha _{\mathit{ij}}^{rs})\):
$$\displaystyle{\alpha _{\mathit{ij}}^{rs} = \left \{\begin{array}{ll} S_{ri} \cdot S_{si}, &i = j;\\ S_{ri} \cdot S_{sj} + S_{rj} \cdot S_{si},&\text{otherwise}. \end{array} \right.}$$
In order to illustrate how the public and private matrices of coefficients, MP and MF, are related, Figs. 11 and 12 highlight the relevant parts of these matrices.
Fig. 11

Cyclic UOV: public matrix of coefficients

Fig. 12

Cyclic UOV: private matrix of coefficients

Blocks B of MP and F of MF obey the relation B: = F ⋅ AUOV(S). Thus, for key generation, one may first generate matrix MP with a circulant block B and then compute \(F:= B \cdot A_{\mathrm{UOV}}^{-1}(S)\). This methodology was able to reduce the UOV public key size by a factor of about 6 for the 80-bit security level.

As mentioned above, \(\mathcal{M}\mathcal{Q}\) signatures have been developed intensively in the last two decades. Many constructions were proposed aiming at key size reduction, which is their main disadvantage today. Table 2 shows some of them.

Table 2

\(\mathcal{M}\mathcal{Q}\) signatures evolution (|hash| and |sig| in bits)

Construction                                        |sk|       |pk|        |hash|   |sig|   Ref.
Rainbow(\(\mathbb{F}_{2^{4}}\), 30, 29, 29)         75.8 KiB   113.4 KiB   232      352     [75]
Rainbow(\(\mathbb{F}_{31}\), 25, 24, 24)            59.0 KiB   77.7 KiB    232      392     [75]
CyclicUOV(\(\mathbb{F}_{2^{8}}\), 26, 52)           14.5 KiB   76.1 KiB    208      624     [74]
NC-Rainbow(\(\mathbb{F}_{2^{8}}\), 17, 13, 13)      25.5 KiB   66.7 KiB    384      672     [95]
Rainbow(\(\mathbb{F}_{2^{8}}\), 29, 20, 20)         42.0 KiB   58.2 KiB    272      456     [75]
CyclicLRS(\(\mathbb{F}_{2^{8}}\), 26, 52)           71.3 KiB   13.6 KiB    208      624     [76]
UOVLRS(\(\mathbb{F}_{2^{8}}\), 26, 52, 26)          71.3 KiB   11.0 KiB    208      624     [76]
CyclicRainbow(\(\mathbb{F}_{2^{8}}\), 17, 13, 13)   19.1 KiB   10.2 KiB    208      344     [74]

4 Code-Based Schemes

In this section we will discuss the theory and practice of cryptosystems based on error-correcting codes.

Coding theory aims at ensuring that, when a collection of data is transmitted over a channel subject to noise (i.e., perturbations in the data), the recipient can recover the original message. For this, one must find efficient ways to add redundant information to the original message such that, if the message reaches the recipient containing errors (inversions of certain bits, in the case of binary messages), the receiver can still decode it.

In the cryptographic context, the primitives add errors to a word of an error-correcting code and compute a syndrome relative to the parity check matrix of this code.

The first construction was a public key encryption scheme proposed by Robert J. McEliece in 1978 [55]. The private key is a random binary irreducible Goppa code (reviewed in Sect. 4.1.1), and the public key is a random-looking generator matrix of a permuted version of this code. The ciphertext is a codeword to which some errors were added, and only the owner of the private key can correct these errors (and thus decrypt the message). A few years later, some parameter adjustments became necessary to keep the security level high, but the scheme remains unbroken to this day.

4.1 Linear Codes

For a better technical understanding of this section, we first explain some basic concepts used in code-based cryptography.

Matrix and vector indices are numbered from 0 throughout this section, unless otherwise stated. Let p be a prime, and let q = pm for some integer m > 0. \(\mathbb{F}_{q}\) denotes the finite field with q elements. The degree of a polynomial \(g \in \mathbb{F}_{q}[x]\) is denoted by \(\deg (g)\). We also define the notions of Hamming weight and Hamming distance:

Definition 3

The Hamming weight of a vector \(u \in \mathcal{C}\subseteq \mathbb{F}_{q}^{n}\) is the number of nonzero coordinates on it, i.e., \(\mathop{wt}\nolimits (u) = \#\{i,\;0\leqslant i < n\mid u_{i}\neq 0\}\). The Hamming distance between two vectors \(u,v \in \mathcal{C}\subseteq \mathbb{F}_{q}^{n}\) is the number \(\mathop{dist}\nolimits (u,v)\) of coordinates that these vectors differ from each other, i.e. \(\mathop{dist}\nolimits (u,v):=\mathop{ wt}\nolimits (u - v)\).
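In code, these two notions are one-liners; the Python sketch below represents vectors as lists of field elements (integers):

```python
def wt(u):
    """Hamming weight: number of nonzero coordinates of u."""
    return sum(1 for ui in u if ui != 0)

def dist(u, v):
    """Hamming distance: dist(u, v) = wt(u - v); over F_2 this is the
    number of positions where u and v differ (u XOR v)."""
    return sum(1 for ui, vi in zip(u, v) if ui != vi)

assert wt([1, 0, 1, 1, 0]) == 3
assert dist([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]) == 2
```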

Now we will introduce some concepts useful for the task of encoding messages. The first is that of a linear code, which can be defined as

Definition 4

A (binary) linear [n, k] error-correcting code \(\mathcal{C}\) is a subspace of \(\mathbb{F}_{2}^{n}\) of dimension k.

A vector \(u \in \mathcal{C}\) is also called a codeword (or, briefly, a word) of \(\mathcal{C}\).

As a vector space, \(\mathcal{C}\) is represented by a base, which can be written as a generator matrix:
  • A generator matrix G of \(\mathcal{C}\) is a matrix over \(\mathbb{F}_{q}\) such that \(\mathcal{C} = \left \langle G\right \rangle\), where \(\left \langle G\right \rangle\) indicates the vector space generated by the rows of G. Normally the rows of G are independent and the matrix has dimension k × n; in other words, \(\exists G \in \mathbb{F}_{q}^{k\times n}: \mathcal{C} =\{ uG \in \mathbb{F}_{q}^{n}\mid u \in \mathbb{F}_{q}^{k}\}\).

  • We say that a generator matrix G is in the systematic form if its first k columns form the identity matrix.

  • The so-called dual code \(\mathcal{C}^{\perp }\) is the code orthogonal to \(\mathcal{C}\) with respect to the scalar product over \(\mathbb{F}_{q}\), and it is a linear code of length n and dimension n − k over \(\mathbb{F}_{q}\).

Alternatively, \(\mathcal{C}\) is fully characterized as the kernel of a linear transformation specified by a parity check matrix (or, for short, parity matrix):
  • A parity matrix H over \(\mathcal{C}\) is a generator matrix of \(\mathcal{C}^{\perp }\). In other words, \(\exists H \in \mathbb{F}_{q}^{r\times n}: \mathcal{C} =\{ v \in \mathbb{F}_{q}^{n}\mid Hv^{\mathop{\mathrm{T}}\nolimits } = 0^{r} \in \mathbb{F}_{q}^{r}\}\), where \(r = n - k\) is the codimension of \(\mathcal{C}\) (i.e., the dimension of the orthogonal space \(\mathcal{C}^{\perp }\)).

It is easy to see that G and H, although not uniquely defined (because there is no single basis for \(\mathcal{C}\) or \(\mathcal{C}^{\perp }\)), are related by \(HG^{\mathop{\mathrm{T}}\nolimits } = 0 \in \mathbb{F}_{q}^{r\times k}\).

The linear transformation defined by a parity matrix is called the syndrome function of the code. The value of this transformation on any vector \(u \in \mathbb{F}_{q}^{n}\) is called the syndrome of this vector. Clearly, the syndrome of any codeword is null.
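The Python sketch below illustrates these objects on the classical Hamming [7, 4, 3] code (a standard textbook example, not one used in cryptography): it builds G and H in systematic form, checks that every codeword has a null syndrome (equivalently, that \(HG^{\mathop{\mathrm{T}}\nolimits } = 0\)), and shows that the syndrome of a corrupted word depends only on the error pattern:

```python
import itertools

# Hamming [7,4,3] code over F_2: G = [I_4 | A], H = [A^T | I_3]
A = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + A[i] for i in range(4)]
H = [[A[i][j] for i in range(4)] + [int(j == k) for k in range(3)]
     for j in range(3)]

def syndrome(word):
    """s^T = H w^T over F_2."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

# Every codeword c = uG has a null syndrome.
for u in itertools.product([0, 1], repeat=4):
    c = [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    assert syndrome(c) == [0, 0, 0]

# A corrupted word r = c + e has a nonzero syndrome that depends only on e.
u = (1, 0, 1, 1)
c = [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
e = [0, 0, 1, 0, 0, 0, 0]               # single-bit error pattern
r = [(ci + ei) % 2 for ci, ei in zip(c, e)]
assert syndrome(r) != [0, 0, 0]
assert syndrome(r) == syndrome(e)       # H(c + e)^T = He^T
```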

Definition 5

The distance (or minimum distance) of a code \(\mathcal{C}\subseteq \mathbb{F}_{q}^{n}\) is the minimum Hamming distance between distinct words of \(\mathcal{C}\), i.e., \(\mathop{dist}\nolimits (\mathcal{C}) =\min \{\mathop{ wt}\nolimits (u)\mid u \in \mathcal{C},u\neq 0\}\).

We write [n, k, d] for an [n, k] code whose minimum distance is (at least) d. If d ⩾ 2t + 1, the code is said to be capable of correcting at least t errors, in the sense that there is no more than one codeword within Hamming distance t of any vector of \(\mathbb{F}_{q}^{n}\).

Several computational problems involving codes are intractable, starting with the actual determination of the minimum distance of a code. The following problems are important for code-based cryptography:

Definition 6 (General Decoding)

Let \(\mathbb{F}_{q}\) be a finite field, and let (G, w, c) be a triple consisting of a matrix \(G \in \mathbb{F}_{q}^{k\times n}\), an integer w < n, and a vector \(c \in \mathbb{F}_{q}^{n}\). The general decoding problem (GDP) asks whether there is a vector \(m \in \mathbb{F}_{q}^{k}\) such that \(e = c - mG\) has Hamming weight \(\mathop{wt}\nolimits (e)\leqslant w\).

The search problem associated with the GDP is to calculate the vector m given the word with errors c.

Definition 7 (Syndrome Decoding)

Let \(\mathbb{F}_{q}\) be a finite field, and let (H, w, s) be a triple consisting of a matrix \(H \in \mathbb{F}_{q}^{r\times n}\), an integer w < n, and a vector \(s \in \mathbb{F}_{q}^{r}\). The syndrome decoding problem (SDP) asks whether there is a vector \(e \in \mathbb{F}_{q}^{n}\) with Hamming weight \(\mathop{wt}\nolimits (e)\leqslant w\) such that \(He^{\mathop{\mathrm{T}}\nolimits } = s^{\mathop{\mathrm{T}}\nolimits }\).

The search problem associated with the SDP consists in computing the error pattern e given its syndrome \(s_{e}:= eH^{\mathop{\mathrm{T}}\nolimits }\).

Both the general decoding problem and the problem of syndrome decoding for linear codes are NP-complete [9].

In contrast to these general results, knowledge of the structure of certain codes makes the GDP and SDP solvable in polynomial time. A basic strategy to define code-based cryptosystems is therefore to keep the information about the structure of the code secret and to publish an associated code without any apparent structure (hence, by hypothesis, hard to decode).

4.1.1 Goppa Codes

One of the most important families of linear error-correcting codes for cryptographic purposes is that of Goppa codes:

Definition 8

Given a prime number p, \(q = p^{m}\) for some m > 0, a sequence \(L = (L_{0},\ldots,L_{n-1}) \in \mathbb{F}_{q}^{n}\) of distinct elements, and a monic polynomial \(g(x) \in \mathbb{F}_{q}[x]\) of degree t (called the generator polynomial) such that g(Li) ≠ 0 for 0⩽ i < n, the Goppa code Γ(L, g) is the \(\mathbb{F}_{p}\)-alternant code corresponding to \(\mathop{GRS}\nolimits _{t}(L,D)\) over \(\mathbb{F}_{q}\), where \(D = (g(L_{0})^{-1},\ldots,g(L_{n-1})^{-1})\).

The distance of an irreducible binary Goppa code is at least 2t + 1 [43], and therefore such a code can correct up to t errors (using, e.g., Patterson’s algorithm [72]), sometimes a little more [11]. Appropriate decoding algorithms can still decode t errors when the generator g(x) is not irreducible but square-free. For example, one can equivalently view a binary Goppa code as an alternant code defined by the generator polynomial g2(x), in which case any alternant decoder is able to correct t errors. So-called wild codes extend this result under certain circumstances [14]. In all other cases, no decoding method is known that corrects more than t∕2 errors.

Equivalently, we can define Goppa codes in terms of their syndrome function:

Definition 9

Let \(L = (L_{0},\ldots,L_{n-1}) \in \mathbb{F}_{q}^{n}\) be a sequence (called the support) of n ⩽ q distinct elements, and let \(g \in \mathbb{F}_{q}[x]\) be a monic irreducible polynomial of degree t such that g(Li) ≠ 0 for all i. For any word \(e \in \mathbb{F}_{p}^{n}\), the Goppa syndrome polynomial \(s_{e} \in \mathbb{F}_{q}[x]\) is defined as
$$\displaystyle{ s_{e}(x) =\sum _{ i=0}^{n-1} \dfrac{e_{i}} {x - L_{i}}\mod g(x). }$$
(5)

The syndrome is a linear function of e. We also present an alternative definition for Goppa codes:

Definition 10

The Goppa code [n, n − mt] over \(\mathbb{F}_{p}\) with support L and generator polynomial g is the kernel of the syndrome function (Eq. 5), i.e., the set \(\varGamma (L,g):=\{ e \in \mathbb{F}_{p}^{n}\mid s_{e} \equiv 0\mod g\}\).

Writing \(s_{e}(x):=\sum _{i}s_{i}x^{i}\) for some \(s \in \mathbb{F}_{q}^{t}\), we can show that \(s^{\mathop{\mathrm{T}}\nolimits } = He^{\mathop{\mathrm{T}}\nolimits }\) with
$$\displaystyle{ H =\mathop{toep}\nolimits (g_{1},\ldots,g_{t}) \cdot \mathop{vdm}\nolimits _{t}(L_{0},\ldots,L_{n-1}) \cdot \mathop{diag}\nolimits (g(L_{0})^{-1},\ldots,g(L_{n-1})^{-1}). }$$
(6)

Thus, H = TVD, where T is a t × t Toeplitz matrix, V is a t × n Vandermonde matrix, and D is an n × n diagonal matrix, according to the following definitions:

Definition 11

Given a sequence \((g_{1},\ldots,g_{t}) \in \mathbb{F}_{q}^{t}\) for some t > 0, the Toeplitz matrix \(\mathop{toep}\nolimits (g_{1},\ldots,g_{t})\) is the t × t matrix with components \(T_{\mathit{ij}}:= g_{t-i+j}\) for j ⩽ i and Tij: = 0 otherwise, namely,
$$\displaystyle{\mathop{toep}\nolimits (g_{1},\ldots,g_{t}) = \left [\begin{array}{cccc} g_{t} & 0 &\ldots & 0 \\ g_{t-1} & g_{t} &\ldots & 0\\ \vdots & \vdots &\ddots & \vdots\\ g_{ 1} & g_{2} & \ldots & g_{t} \end{array} \right ].}$$

Definition 12

Given t > 0 and a sequence \(L = (L_{0},\ldots,L_{n-1}) \in \mathbb{F}_{q}^{n}\) for some n > 0, the Vandermonde matrix \(\mathop{vdm}\nolimits (t,L)\) is the t × n matrix with components \(V _{\mathit{ij}} = L_{j}^{i}\), i.e.,
$$\displaystyle{\mathop{vdm}\nolimits (t,L) = \left [\begin{array}{ccc} 1 &\ldots & 1\\ L_{ 0} & \ldots & L_{n-1} \\ L_{0}^{2} & \ldots & L_{n-1}^{2}\\ \vdots & \ddots & \vdots \\ L_{0}^{t-1} & \ldots & L_{n-1}^{t-1} \end{array} \right ].}$$

Definition 13

Given a sequence \((d_{0},\ldots,d_{n-1}) \in \mathbb{F}_{q}^{n}\) for some n > 0, we denote by \(\mathop{diag}\nolimits (d_{0},\ldots,d_{n-1})\) the diagonal matrix with components Djj: = dj, 0⩽ j < n, and Dij: = 0 otherwise, namely,
$$\displaystyle{\mathop{diag}\nolimits (d_{0},\ldots,d_{n-1}) = \left [\begin{array}{cccc} d_{0} & 0 &\ldots & 0 \\ 0 &d_{1} & \ldots & 0\\ \vdots & \vdots &\ddots & \vdots \\ 0 & 0 &\ldots &d_{n-1} \end{array} \right ].}$$
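
The factorization H = TVD of Eq. (6) can be made concrete in a few lines. The Python sketch below builds the three factors for toy parameters; as a simplifying assumption it takes q prime, so that field arithmetic is ordinary modular arithmetic (actual Goppa codes work over an extension field \(\mathbb{F}_{p^{m}}\)), and the support and generator polynomial are arbitrary illustrative choices:

```python
import numpy as np

q, t, n = 29, 3, 8                    # toy sizes; q is prime only for simplicity
L = [2, 3, 5, 7, 11, 13, 17, 19]      # support: distinct elements of F_q
g = [1, 2, 0, 1]                      # g(x) = x^3 + 2x + 1, coefficients g_0..g_t

def g_eval(a):
    return sum(g[k] * pow(a, k, q) for k in range(t + 1)) % q

assert all(g_eval(a) != 0 for a in L)  # g must not vanish on the support

# T = toep(g_1,...,g_t): T[i][j] = g_{t-i+j} for j <= i, else 0 (Definition 11)
T = np.array([[g[t - i + j] if j <= i else 0 for j in range(t)]
              for i in range(t)])
# V = vdm_t(L): V[i][j] = L_j^i (Definition 12)
V = np.array([[pow(a, i, q) for a in L] for i in range(t)])
# D = diag(g(L_0)^-1, ..., g(L_{n-1})^-1) (Definition 13)
D = np.diag([pow(g_eval(a), -1, q) for a in L])

H = (T @ V @ D) % q                   # the t x n parity matrix of Eq. (6)
print(H)
```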

4.2 Decodability

All [n, k] codes with distance d satisfy the Singleton bound, which states that \(d\leqslant n - k + 1\). The existence of a binary linear [n, k] code with distance d is guaranteed whenever:
$$\displaystyle{\sum _{j=0}^{d-2}\binom{n - 1}{j} < 2^{n-k}.}$$

This is called the Gilbert-Varshamov (GV) bound. Random binary codes attain the GV bound, in the sense that the above inequality is very close to equality [53]. However, no known family of binary codes can be decoded in subexponential time up to the GV bound, nor is any subexponential algorithm known for decoding general codes up to that bound.

Consider an \(\mathbb{F}_{p}\)-alternant code of length n able to decode t errors, derived from a \(\mathop{GRS}\nolimits\) code over \(\mathbb{F}_{p^{m}}\). The syndrome space has size pmt. However, the decodable syndromes are only those that correspond to error vectors of weight not exceeding t. In other words, only \(\sum _{w=1}^{t}\binom{n}{w}(p - 1)^{w}\) nonzero syndromes are uniquely decodable, and thus their density is
$$\displaystyle{\delta = \dfrac{1} {p^{\mathit{mt}}}\sum _{w=1}^{t}\binom{n}{w}(p - 1)^{w}.}$$
If the code length is a fraction 1∕pc of the maximum length for some c ⩾ 0, i.e., \(n = p^{m-c}\), the density can be approximated by
$$\displaystyle{\delta \approx \dfrac{1} {p^{\mathit{mt}}}\left (\dfrac{n^{t}} {t!} \right )(p - 1)^{t} = \dfrac{(p^{m-c})^{t}(p - 1)^{t}} {p^{\mathit{mt}}t!} = \left (\dfrac{p - 1} {p^{c}} \right )^{t} \dfrac{1} {t!}.}$$

A particularly good case is therefore δ ⩾ 1∕t!, which occurs when \(p^{c}/(p - 1)\leqslant 1\), i.e., c ⩽ logp(p − 1) or \(n\geqslant p^{m}/(p - 1)\). Unfortunately this also means that, for binary codes, the highest densities are attained only by codes of maximum or nearly maximum length; otherwise the density is reduced by a factor 2ct. For codes of maximum length (n = pm and hence c = 0), the density simplifies to \(\delta \approx (p - 1)^{t}/t!\), which attains the relative minimum δ ≈ 1∕t! for binary codes.
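
The density formula is easy to evaluate numerically. A small Python check, assuming a full-length binary code (n = 2^m) with m = 11 and t = 42 purely for illustration, confirms the approximation δ ≈ 1∕t!:

```python
from math import comb, factorial

def density(p, m, t, n):
    """Fraction of nonzero syndromes that are uniquely decodable."""
    return sum(comb(n, w) * (p - 1) ** w for w in range(1, t + 1)) / p ** (m * t)

p, m, t = 2, 11, 42                    # illustrative binary parameters
print(density(p, m, t, n=p ** m))       # full-length code: close to 1/t!
print(1 / factorial(t))
```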

We will also be interested in the particular case of error patterns in which one particular magnitude prevails over the others, and especially when all error magnitudes are equal. In this case, the density of decodable syndromes is \(\delta \approx (p - 1)/t!\), which again attains the minimum δ ≈ 1∕t! for binary codes.

4.3 Code-Based Cryptosystems

The original McEliece [55] and Niederreiter [64] schemes, despite being historically (if somewhat inaccurately) called cryptosystems, are better described as trapdoor one-way functions than as full encryption methods. Functions of this nature can be transformed into proper cryptosystems in various ways, for example via the Fujisaki-Okamoto transform.

Interestingly, McEliece and Niederreiter commonly show a substantial processing speed advantage over traditional schemes. For example, for a code of length n the basic operations have time complexity O(n2), while Diffie-Hellman/DSA systems, as well as RSA private-exponent operations, have time complexity O(n3) for keys of n bits.

For simplicity, the descriptions of the McEliece and Niederreiter schemes below assume that the patterns of correctable errors are binary vectors of weight t, but variants with broader error patterns are possible, depending on the decoding capability of the underlying code. Simple and effective criteria for choosing parameters are given in Sect. 4.3.3. Each encryption scheme consists of three algorithms: MakeKeyPair, Encrypt, and Decrypt.

4.3.1 McEliece

  • MakeKeyPair. Given the desired security level λ, choose a prime p (commonly p = 2), a finite field \(\mathbb{F}_{q}\) with \(q = p^{m}\) for some m > 0, and a Goppa code Γ(L, g) with support \(L = (L_{0},\ldots,L_{n-1}) \in (\mathbb{F}_{q})^{n}\) (of distinct elements) and a square-free generator polynomial \(g \in \mathbb{F}_{q}[x]\) of degree t satisfying g(Lj) ≠ 0, 0⩽ j < n. Let \(k = n -\mathit{mt}\). The choice is guided so that the cost of decoding an [n, k, 2t + 1] code is at least \(2^{\lambda }\) steps. Compute a systematic generator matrix \(G \in \mathbb{F}_{p}^{k\times n}\) for Γ(L, g), i.e., \(G = [I_{k}\mid - M^{T}]\) for some matrix \(M \in \mathbb{F}_{p}^{\mathit{mt}\times k}\), with Ik the identity matrix of order k. The private key is sk: = (L, g) and the public key is pk: = (M, t).

  • Encrypt. To encrypt a plaintext \(d \in \mathbb{F}_{p}^{k}\), choose a vector \(e\stackrel{\;_{\$}}{\leftarrow }\{0,1\}^{n} \subseteq \mathbb{F}_{p}^{n}\) with weight \(\mathop{wt}\nolimits (e)\leqslant t\) and compute the ciphertext \(c \leftarrow \mathit{dG} + e \in \mathbb{F}_{p}^{n}\).

  • Decrypt. To decrypt a ciphertext \(c \in \mathbb{F}_{p}^{n}\) with the knowledge of L and g, compute the syndrome of c, apply a decoder to it to determine the error vector e, and recover the plaintext d from the first k columns of c − e. (A toy sketch of this flow follows.)
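
As a toy illustration of this flow (and only of the flow), the Python sketch below lets a [7, 4] Hamming code, which corrects t = 1 error via a syndrome lookup table, stand in for the secret Goppa code; real McEliece parameters are vastly larger and use an algebraic decoder such as Patterson’s, so every matrix and helper here is an illustrative assumption:

```python
import numpy as np

# Stand-in secret code: a [7, 4] Hamming code with t = 1.
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])   # systematic generator matrix [I_k | -M^T]
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])   # parity matrix: HG^T = 0 (mod 2)
n, k, t = 7, 4, 1

# "Secret decoder": map each syndrome to its weight-<=1 error pattern.
table = {(0, 0, 0): np.zeros(n, dtype=int)}
for j in range(n):
    e = np.zeros(n, dtype=int); e[j] = 1
    table[tuple(H @ e % 2)] = e

def encrypt(d, e):                  # c <- dG + e with wt(e) <= t
    return (d @ G + e) % 2

def decrypt(c):
    e = table[tuple(H @ c % 2)]     # decode the syndrome of c
    return ((c - e) % 2)[:k]        # plaintext: first k columns of c - e

d = np.array([1, 0, 1, 1])
e = np.zeros(n, dtype=int); e[2] = 1   # weight-t error (random in practice)
assert (decrypt(encrypt(d, e)) == d).all()
```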

4.3.2 Niederreiter

  • MakeKeyPair. Given the desired security level λ, choose a prime p (commonly p = 2), a finite field \(\mathbb{F}_{q}\) with \(q = p^{m}\) for some m > 0, and a Goppa code Γ(L, g) with support \(L = (L_{0},\ldots,L_{n-1}) \in (\mathbb{F}_{q})^{n}\) (of distinct elements) and a square-free generator polynomial \(g \in \mathbb{F}_{q}[x]\) of degree t satisfying g(Lj) ≠ 0, 0⩽ j < n. Let \(k = n -\mathit{mt}\). The choice is guided so that the cost of decoding an [n, k, 2t + 1] code is at least \(2^{\lambda }\) steps. Compute a systematic parity matrix \(H \in \mathbb{F}_{p}^{\mathit{mt}\times n}\) for Γ(L, g), i.e., \(H = [M\mid I_{\mathit{mt}}]\) for some matrix \(M \in \mathbb{F}_{p}^{\mathit{mt}\times k}\), with Imt the identity matrix of order mt. Finally, choose as a public parameter a ranking function \(\phi: \mathcal{B}(n,t) \rightarrow \mathbb{Z}/\binom{n}{t}\mathbb{Z}\), a bijection between weight-t binary words and integer indices (a sketch of one such function follows this list). The private key is sk: = (L, g) and the public key is pk: = (M, t, ϕ).

  • Encrypt. To encrypt a plaintext \(d \in \mathbb{Z}/\binom{n}{t}\mathbb{Z}\), represent d as an error pattern \(e \leftarrow \phi ^{-1}(d) \in \{ 0,1\}^{n} \subseteq \mathbb{F}_{p}^{n}\) of weight \(\mathop{wt}\nolimits (e) = t\), and compute as ciphertext the syndrome \(s \leftarrow \mathit{eH}^{\mathop{\mathrm{T}}\nolimits } \in \mathbb{F}_{p}^{\mathit{mt}}\).

  • Decrypt. To decrypt a ciphertext \(s \in \mathbb{F}_{p}^{\mathit{mt}}\) with the knowledge of L and g, transform this syndrome into a decodable one, apply a decoder to the result to determine the error vector e, and recover from it the plaintext \(d \leftarrow \phi (e)\).
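
One standard way to instantiate the ranking function ϕ (an assumption here, since the text does not fix a particular one) is the combinatorial number system, which bijectively maps the support of a weight-t word to an integer in [0, \(\binom{n}{t}\)). A Python sketch:

```python
from math import comb

def rank(support):
    """phi: sorted support of a weight-t word -> integer in [0, C(n,t))."""
    return sum(comb(pos, i + 1) for i, pos in enumerate(sorted(support)))

def unrank(d, t):
    """phi^{-1}: integer d -> sorted support of the corresponding weight-t word."""
    support = []
    for i in range(t, 0, -1):
        pos = i - 1                    # smallest candidate: C(i-1, i) = 0 <= d
        while comb(pos + 1, i) <= d:   # greedily take the largest C(pos, i) <= d
            pos += 1
        support.append(pos)
        d -= comb(pos, i)
    return sorted(support)

n, t = 16, 3
assert all(rank(unrank(d, t)) == d for d in range(comb(n, t)))  # a bijection
```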

4.3.3 Parameters for Code-Based Cryptosystems

The classical schemes of McEliece and Niederreiter, implemented over the class of Goppa codes, remain safe to the present date, in contrast to implementations over many other proposed families of codes [41, 67]. Indeed, Goppa codes have weathered intense cryptanalysis attempts well, and despite considerable progress in the area [10] (see also [12] for a review), they remain essentially intact for the cryptographic purposes suggested in McEliece’s pioneering work [55].

Table 3 suggests parameters for the codes underlying cryptosystems such as McEliece or Niederreiter, together with the size | pk | in bits of the resulting public key. Only generic irreducible Goppa codes are considered.

Table 3

Parameters for McEliece/Niederreiter using generic binary Goppa codes

   m      n      k      t     lg WF       | pk |
  11   1893   1431     42    80.025      661122
  12   2887   2191     58   112.002     1524936
  12   3307   2515     66   128.007     1991880
  13   5397   4136     97   192.003     5215496
  13   7150   5447    131   256.002     9276241

We notice that, in this scenario of generic Goppa codes, these schemes are adversely affected by very large keys compared to their conventional counterparts. Hence the importance of seeking ways to reduce key sizes while keeping the associated security level intact.

The first steps toward reducing key sizes without reducing the security level in post-quantum cryptosystems were taken by Monico et al. using low-density parity-check (LDPC) codes [61], then by Gaborit with quasi-cyclic codes [31], and by Baldi and Chiaraluce through a combination of both [4].

4.4 LDPC and QC-LDPC Codes

LDPC codes were invented by Robert Gallager [32] and are linear codes obtained from sparse bipartite graphs. Suppose that \(\mathbb{G}\) is a graph with n nodes on the left (called message nodes) and r nodes on the right (called check nodes), as illustrated in Fig. 13 below. The graph gives rise to a linear code of length n and dimension at least n − r as follows: the n coordinates of the codewords are associated with the n message nodes, and the codewords are the vectors \((c_{1},\ldots,c_{n})\) such that, for every check node, the sum of the coordinates of its neighboring message nodes is zero.
Fig. 13 Bipartite graph

The graph representation is equivalent to a matrix representation via the adjacency matrix of the graph: H is a binary r × n matrix whose entry (i, j) is 1 if and only if the ith check node is connected to the jth message node. The LDPC code defined by the graph is then the set of vectors \(c = (c_{1},\ldots,c_{n})\) such that \(H \cdot c^{\mathop{\mathrm{T}}\nolimits } = 0\). The matrix H is called a parity matrix for the code. Conversely, any binary r × n matrix gives rise to a bipartite graph between n message nodes and r check nodes, and the code defined as the null space of H is precisely the code associated with that graph. Therefore, any linear code can be represented as a code associated with a bipartite graph (note that this graph is not uniquely defined by the code). However, not every binary linear code has a representation as a sparse bipartite graph; when one exists, the code is called a low-density parity-check code.

An important subclass of LDPC codes with advantages over the others in the same class is that of quasi-cyclic low-density parity-check (QC-LDPC) codes [90]. In general, an [n, k] QC-LDPC code satisfies n = n0b and k = k0b (and thus also r = r0b) for some b, n0, k0 (and r0), and admits a parity matrix consisting of \(r_{0} \times n_{0}\) blocks of sparse circulant b × b submatrices. A particularly important case is b = r (so that r0 = 1 and \(k_{0} = n_{0} - 1\)), since a systematic parity matrix for such a code is fully defined by the first row of each r × r block. The parity matrix is then said to be in circulant form.

However, it was shown that all these proposals contain vulnerabilities that make them unsuitable for cryptographic purposes [67]. Indeed, in these schemes the trapdoor was essentially protected by nothing more than a private permutation of the underlying code. The attack strategy in this scenario is to obtain a solvable system of linear equations that the components of the permutation matrix must satisfy; it succeeded because of the overly restrictive nature of the secret permutation (which needs to preserve the quasi-cyclic structure of the result) and the fact that the secret code is a subcode of a very particular public code.

An attempt to fix the proposal of Baldi and Chiaraluce was presented in [5]. More recently, Berger et al. [8] showed how to avoid the problems of the original Gaborit scheme and removed the previously known vulnerabilities through two techniques:
  1. Extracting public keys as shortened blocks of very long private codes, exploiting a theorem due to Wieschebrink on the NP-completeness of distinguishing shortened codes [92];

  2. Working with subfield subcodes over an intermediate field between the base field and the extension field of the original GRS code adopted in the construction.

These techniques have been applied with some success to quasi-cyclic codes. However, almost all of this family of codes was subsequently broken due to a structural security weakness, more precisely a relationship between the secret structure and certain multivariate quadratic equation systems [30].

Historical experience therefore suggests restricting the search for more efficient parameters of code-based cryptosystems to the class of Goppa codes.

4.5 MDPC and QC-MDPC Codes

An interesting subclass of LDPC codes consists of the moderate-density parity-check (MDPC) codes and their quasi-cyclic variant (QC-MDPC) [60].

These codes, introduced by Misoczki et al., have densities low enough to enable decoding by simple (and arguably more efficient) methods such as belief propagation and Gallager’s bit flipping, yet high enough to prevent attacks based on the presence of very sparse words in the dual code, as in Stern’s attack [87] and its variants, without ruining the error-correction capability, while also keeping information-set decoding attacks [10, 13] infeasible.

Moreover, to prevent structural attacks such as those proposed by Faugère et al. [30] and Leander and Gauthier [36], codes intended for encryption should be kept as free of structure as possible, apart from the secret trapdoor that enables private decryption and, in the case of quasi-cyclic codes, the external symmetries that allow an efficient implementation. Finally, the circulant symmetry can introduce security weaknesses, as pointed out by Sendrier [83], but with respect to attack performance it induces only a polynomial (specifically, linear) gain, and a small adjustment of the parameters eliminates this problem completely. Typical densities in this case lie in the range of 0.4 to 0.9 % of the code length, an order of magnitude above LDPC codes (hence the moderate-density name), yet still low enough for Gallager-style decoding. The construction is also as random as possible, maintaining only the prescribed density and circulant geometry. Furthermore, the code length is much larger than is typical for LDPC codes.

4.6 Method for Gallager’s Hard Decision Decoding (Bit Flipping)

In this section we describe Gallager’s hard-decision decoding algorithm, more simply called bit flipping, following the concise and clear description of Huffman and Pless [46]. This algorithm is needed to recover the original message from an encrypted codeword containing errors.

We assume that a codeword of a binary LDPC code \(\mathcal{C}\) is transmitted and the vector c is received. In the computation of the syndrome \(s = \mathit{cH}^{\mathop{\mathrm{T}}\nolimits }\), each received bit of c affects at most dv components of the syndrome. If only the jth bit of c contains an error, then the corresponding dv components si of s equal 1, indicating the parity-check equations that are not satisfied. Even if a few other erroneous bits contribute to the computation of some si, it is expected that several of the dv components of s equal 1. This is the basis of Gallager’s decoding algorithm, known both as hard-decision decoding and as bit flipping (a code sketch appears at the end of this section):
  1. Compute \(\mathit{cH}^{\mathop{\mathrm{T}}\nolimits }\) and determine the unsatisfied parity checks (namely, the parity checks where the components of \(\mathit{cH}^{\mathop{\mathrm{T}}\nolimits }\) equal 1).

  2. For each of the n bits, compute the number of unsatisfied parity checks involving that bit.

  3. Flip the bits of c that are involved in a number of unsatisfied parity-check equations exceeding some threshold.

  4. Repeat steps 1, 2, and 3 until either \(\mathit{cH}^{\mathop{\mathrm{T}}\nolimits } = 0\), in which case c has been successfully decoded, or until a certain bound on the number of iterations is reached, in which case decoding of the received vector has failed.

The bit-flipping algorithm is not the best method for decoding LDPC codes; in fact, the belief-propagation technique [32, 46] is known to achieve superior error correction. However, belief-propagation decoders maintain an increasingly refined error probability for each bit of the received word c, incurring floating-point arithmetic, high-precision approximations, and computationally expensive algorithms. In a scenario where the number of errors is fixed and known in advance, as is the case in cryptographic applications, parameters can be adjusted so that complex and expensive decoding methods, such as belief propagation, are not needed.
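
The four steps above translate almost verbatim into code. The following Python sketch is an illustrative rendering (the toy parity matrix, an assumption for demonstration, is the incidence matrix of the complete graph K4, so every bit participates in exactly dv = 2 checks and the code corrects one error):

```python
import numpy as np

def bit_flip_decode(H, c, threshold, max_iter=50):
    """Gallager's hard-decision (bit-flipping) decoder over F_2 (a sketch)."""
    c = c.copy()
    for _ in range(max_iter):
        s = H @ c % 2                  # step 1: unsatisfied parity checks
        if not s.any():
            return c                   # success: c is now a codeword
        counts = H.T @ s               # step 2: unsatisfied checks per bit
        c[counts >= threshold] ^= 1    # step 3: flip bits over the threshold
    return None                        # step 4: iteration bound reached, failure

# Toy sparse code: H is the incidence matrix of K4 (checks = vertices,
# bits = edges), so each bit meets exactly d_v = 2 checks.
H = np.array([[1,1,1,0,0,0],
              [1,0,0,1,1,0],
              [0,1,0,1,0,1],
              [0,0,1,0,1,1]])
c = np.array([1,1,0,1,0,0])            # a codeword (a triangle in K4)
c[4] ^= 1                              # one transmission error
print(bit_flip_decode(H, c, threshold=2))  # -> [1 1 0 1 0 0]
```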

4.7 Digital Signatures with Error Correcting Codes

After unsuccessful attempts to create digital signature schemes based on error-correcting codes [2, 88], in 2001 Courtois, Finiasz, and Sendrier proposed a promising scheme [26].

4.7.1 CFS

CFS was proposed as a digital signature scheme based on the McEliece cryptosystem. By definition, a digital signature scheme must provide a way to sign any document such that the signature uniquely identifies its author, together with an efficient public verification algorithm. For these tasks a linear code must be chosen, denoted below by \(\mathcal{C}\). CFS uses a public hash function h to compress the document m, computing the vector h(m). Decoding this hash with the decoding algorithm of the chosen error-correcting code yields a vector c, corresponding to the signature of the message m. For verification, one simply re-encodes c, received together with the message m, and checks whether it matches the hash of m, as follows:
  • MakeKeyPair:
    1. Choose a Goppa code G(L, g(X));
    2. Compute a corresponding \((n - k) \times n\) parity-check matrix H;
    3. Compute V = SHP, where S is a random invertible binary \((n - k) \times (n - k)\) matrix and P is a random n × n permutation matrix.

    The private key is G, and the public key is (V, t).

  • Signature:
    1. Find the smallest \(i \in \mathbb{N}\) such that, for c = h(m, i) and \(c^{{\prime}} = S^{-1}c\), \(c^{{\prime}}\) is a decodable syndrome of G.
    2. Using the decoding algorithm of G, compute the error vector \(e^{{\prime}}\) whose syndrome is \(c^{{\prime}}\), i.e., \(c^{{\prime}} = H(e^{{\prime}})^{t}\).
    3. Compute \(e^{t} = P^{-1}(e^{{\prime}})^{t}\).

    The signature is the pair (e, i).

  • Signature Verification:
    1. Compute \(c = Ve^{t}\).
    2. Accept iff c = h(m, i).

Although CFS remains a secure signature scheme after extensive cryptanalysis, it is not suitable for most applications in common use today: besides the large public keys, the signing cost is too high for any reliable set of parameters.

5 Lattice-Based Schemes

From the mathematical point of view, lattices have been studied since the eighteenth century by mathematicians such as Lagrange and Gauss. Interest in them for cryptography, however, is more recent, starting with Ajtai’s work, which proves the existence of one-way functions based on the hardness of the shortest vector problem (SVP). The versatility and flexibility of lattice-based cryptography, in terms of possible cryptographic features and the simplicity of the basic operations, make it one of the most promising lines of research in cryptography. Moreover, some lattice schemes are supported by security proofs that rely on the worst-case hardness of certain problems.

Lattice-based cryptography can be divided into two categories: (i) schemes with a security proof, as is the case of Ajtai’s construction or cryptosystems based on the LWE problem, whose encryption and decryption are quadratic or even cubic algorithms involving the manipulation of a matrix A associated with the public key, and hence not efficient compared to conventional cryptography; and (ii) schemes without a security proof but with efficient implementations, such as the NTRU cryptosystem. A recent result [86] reduces the security of NTRU-like cryptosystems to worst-case problems over ideal lattices. Although problems that are hard over general lattices may conceivably be easier over ideal lattices, no polynomial-time algorithm is known to solve them, even allowing a polynomial approximation factor or the use of quantum computation.

5.1 Basic Definitions

Definition 14

Let \(\mathbb{R}^{m}\) be an m-dimensional Euclidean vector space, and let \(B =\{ b_{1},\ldots,b_{n}\}\) be a set of n linearly independent vectors. The lattice \(\mathcal{L}\) in \(\mathbb{R}^{m}\) is the additive subgroup consisting of all linear combinations of B with integer coefficients, in other words:
$$\displaystyle{\mathcal{L}(b_{1},\ldots,b_{n}) = \left \{\sum _{i=1}^{n}x_{ i}b_{i}: x_{i} \in \mathbb{Z}\right \},}$$
where the vectors \(b_{1},\ldots,b_{n}\) are called the basis vectors of \(\mathcal{L}\) and the set B is called a lattice basis.

Alternatively, it is possible to define lattices using matrix notation. Let \(B \in \mathbb{R}^{m\times n}\) be a matrix; the lattice generated by B is defined as \(\mathcal{L} = \left \{Bx\mid x \in \mathbb{Z}^{n}\right \}\). The determinant \(\det (B)\) is independent of the choice of basis and corresponds geometrically to the inverse of the density of lattice points in \(\mathbb{Z}^{m}\).

Definition 15

Given the lattice \(\mathcal{L}(B)\), the basis vectors can be seen as the edges of an n-dimensional parallelepiped. Thus, we define \(\mathcal{P}(B) =\{ Bx\mid x \in [0,1)^{n}\}\), called the fundamental parallelepiped of B. We can also define a symmetric variant \(\mathcal{P}_{1/2}(B) =\{ Bx\mid x \in [-1/2,1/2)^{n}\}\), the centralized fundamental parallelepiped of B. Figures 14 and 15 show examples of fundamental parallelepipeds in dimension 2.
Fig. 14 \(\mathcal{P}(B)\)
Fig. 15 \(\mathcal{P}_{1/2}(B)\)

Theorem 1

Let \(\mathcal{L}\) be a lattice and let \(\mathcal{P}(B)\) be the fundamental parallelepiped of \(\mathcal{L}\). Then any element \(w \in \mathbb{R}^{m}\) can be written as \(w = v + t\), with \(v \in \mathcal{L}\) and \(t \in \mathcal{P}(B)\), where t is uniquely determined. This operation is equivalent to a modular reduction, the vector t being interpreted as \(w\pmod \mathcal{P}(B)\) (Fig. 16).
Fig. 16 Reduction modulo \(\mathcal{P}(B)\)

The volume of the fundamental parallelepiped is related to the determinant of B and is given by \(\mathop{Vol}\nolimits (\mathcal{P}(B)) = \vert \mathop{det}\nolimits (B)\vert \). Given two bases \(B =\{ b_{1},\ldots,b_{n}\}\) and \(B^{{\prime}} =\{ b_{1}^{{\prime}},\ldots,b_{n}^{{\prime}}\}\) for a lattice \(\mathcal{L}\), we have \(\mathop{det}\nolimits (B) = \pm \mathop{det}\nolimits (B^{{\prime}})\).

The most important computational problem on lattices is the shortest vector problem (SVP), defined as follows: given the lattice \(\mathcal{L}(B)\), find a nonzero vector of minimum norm. In practice, an approximation factor γ(n) is used, and we look for a nonzero vector whose norm is at most γ(n) times the minimum.

The following problems are also important for cryptographic purposes:
  • closest vector problem (CVP): given a lattice \(\mathcal{L}(B)\) and a vector \(t \in \mathbb{R}^{m}\), find the vector \(v \in \mathcal{L}(B)\) closest to t;

  • shortest independent vectors problem (SIVP): given a basis \(B \in \mathbb{Z}^{m\times n}\), find n linearly independent lattice vectors \((v_{1},\ldots,v_{n})\) such that the maximum norm among them is minimal.

Definition 16

Given a lattice \(\mathcal{L}\) with basis \(B = (v_{1},\ldots v_{n})\), the Hadamard ratio, denoted \(\mathcal{H}(B)\), is defined as follows:
$$\displaystyle{\mathcal{H}(B) = \left ( \frac{\vert \mathop{det}\nolimits \mathcal{L}\vert } {\prod _{1\leq i\leq n}\vert \vert v_{i}\vert \vert }\right )^{1/n}.}$$

It is easy to show that \(0 \leq \mathcal{H}(B) \leq 1\) for any basis B. Furthermore, the closer this ratio is to 1, the “more orthogonal” the basis is.

A particularly important class of lattices is that of q-ary lattices, denoted by Λq. Given an integer q, the vector coordinates are restricted to elements of \(\mathbb{Z}_{q}\). Given a matrix \(A \in \mathbb{Z}_{q}^{n\times m}\), the q-ary lattice is determined by the rows of A rather than its columns; that is, it is formed by the vectors \(y = A^{T}s\pmod q\) for \(s \in \mathbb{Z}^{n}\). The orthogonal q-ary lattice \(\varLambda _{q}^{\perp }\) corresponding to A is given by the vectors y such that \(Ay = 0\pmod q\). Given a lattice \(\mathcal{L}\), the dual lattice \(\mathcal{L}^{{\ast}}\) is formed by the vectors y such that \(\langle x,y\rangle \in \mathbb{Z}\) for all \(x \in \mathcal{L}\). In particular, the orthogonal q-ary lattice \(\varLambda _{q}^{\perp }(A)\) is the same as \(q\,\varLambda _{q}(A)^{{\ast}}\).

5.1.1 LLL Algorithm

The LLL algorithm is important in lattice-based cryptography because practical security analyses are in general based on it. In fact, LLL can be used to tackle the SVP and related problems, as we will see later. In this section we describe the LLL algorithm. Given a lattice and a basis for it, LLL computes a new basis with Hadamard ratio closer to 1. In other words, LLL performs a basis reduction: the computed basis has smaller norms and greater orthogonality than the original one.

In a vector space with a basis \((v_{1},\cdots \,,v_{n})\), an orthogonal basis can easily be obtained using the Gram-Schmidt algorithm. In lattices we can apply a similar approach using Gauss reduction. The idea is the same as in the Gram-Schmidt algorithm, where \(\mu _{\mathit{ij}} = v_{i}v_{j}^{{\ast}}/\vert \vert v_{j}^{{\ast}}\vert \vert ^{2}\), except that the values μij are not necessarily integers; Gauss reduction therefore uses the closest integers \(\lfloor \mu _{\mathit{ij}}\rceil \). The algorithm ends when these closest integers are zero, a condition that only in dimension 2 suffices to prove that the shortest vector has been found.

Algorithm 5.1 Gauss reduction
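
Algorithm 5.1 appears as a figure in the original; the following Python sketch is a plausible reconstruction of the two-dimensional Gauss (Lagrange) reduction just described, not the chapter’s exact pseudocode:

```python
import numpy as np

def gauss_reduce(v1, v2):
    """Two-dimensional Gauss (Lagrange) lattice basis reduction (a sketch)."""
    v1, v2 = np.array(v1, float), np.array(v2, float)
    while True:
        if v1 @ v1 > v2 @ v2:
            v1, v2 = v2, v1             # keep the shorter vector first
        m = round(v1 @ v2 / (v1 @ v1))  # closest integer to the GS coefficient
        if m == 0:                      # termination: v1 is a shortest vector
            return v1.astype(int), v2.astype(int)
        v2 = v2 - m * v1                # size-reduce v2 against v1

print(gauss_reduce([5, 8], [3, 5]))     # -> two norm-1 vectors (lattice is Z^2)
```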

Definition 17

Let \(B = (v_{1},\ldots,v_{n})\) be a basis for lattice \(\mathcal{L}\) and let \(B^{{\ast}} = (v_{1}^{{\ast}},\ldots,v_{n}^{{\ast}})\) be the Gram-Schmidt orthogonal basis. The basis B is called LLL-reduced if the following conditions are satisfied:
Norm condition:

\(\vert \mu _{i,j}\vert = \frac{v_{i} \cdot v_{j}^{{\ast}}} {\vert \vert v_{j}^{{\ast}}\vert \vert ^{2}} \leq \frac{1} {2}\) for all 1 ≤ j < i ≤ n.

Lovász condition:

\(\vert \vert v_{i}^{{\ast}}\vert \vert ^{2} \geq (\frac{3} {4} -\mu _{i,i-1}^{2})\vert \vert v_{ i-1}^{{\ast}}\vert \vert ^{2}\) for all 1 < i ≤ n.

Theorem 2

Let B be an LLL-reduced basis for a lattice \(\mathcal{L}\); then B solves the SVP problem with approximation factor \(2^{(n-1)/2}\).

It is important to justify the choice of the value 3∕4. If it were replaced by 1, we would have Gauss reduction; however, there is no proof that the algorithm would then terminate in polynomial time. In fact, any value strictly smaller than 1 suffices. Cryptosystems based on SVP and CVP must therefore have their parameters carefully chosen in order to resist attacks based on the LLL algorithm.

Algorithm 5.2 LLL

In general, given a basis \((v_{1},\ldots,v_{n})\), it is possible to obtain a new basis satisfying the norm condition simply by subtracting multiples of \(v_{1},\ldots,v_{k-1}\) from vk in order to reduce its norm. Once the norm condition holds, we check whether the Lovász condition is also satisfied; if not, the vectors are reordered and the procedure is repeated, executing the norm reduction again.
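
A compact, deliberately unoptimized Python sketch of this loop, combining the norm (size) reduction with the Lovász test of Definition 17; it recomputes a floating-point Gram-Schmidt basis at each pass for clarity and is a reconstruction, not the chapter’s Algorithm 5.2 verbatim:

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """LLL basis reduction; the rows of B are the basis vectors (a sketch)."""
    B = np.array(B, float)
    n = len(B)

    def gram_schmidt():
        Bs, mu = B.copy(), np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    k = 1
    while k < n:
        _, mu = gram_schmidt()
        for j in range(k - 1, -1, -1):          # norm condition: size-reduce
            B[k] -= round(mu[k, j]) * B[j]
        Bs, mu = gram_schmidt()
        # Lovász condition with the classical factor delta = 3/4
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]       # reorder the vectors and retry
            k = max(k - 1, 1)
    return B.astype(int)

print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```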

5.2 Lattice Based Hash

The first lattice-based cryptosystem was proposed by Ajtai [1]. This work is very important because it presented a worst-case reduction, in the sense that an attack on the cryptosystem leads to solutions of hard instances of lattice problems. In particular, inverting the hash function is, on average, as hard as the SVP on dual lattices in the worst case.

Specifically, given integers n, m, d, q, we build a cryptographic hash family \(f_{A}:\{ 0,\ldots,d - 1\}^{m} \rightarrow \mathbb{Z}_{q}^{n}\), indexed by a matrix \(A \in \mathbb{Z}_{q}^{n\times m}\). Given a vector y, we have \(f_{A}(y) = Ay\pmod q\). Algorithm 5.3 describes these operations. A possible parameter choice is \(d = 2,q = n^{2},m \approx 2n\log q/\log d\), yielding a compression factor of 2.

The scheme’s security follows from the fact that if one is able to find a collision \(f_{A}(y) = f_{A}(y^{{\prime}})\), then it is possible to compute a short vector, y − y′, in \(\mathcal{L}_{q}^{{\ast}}(A)\).

Algorithm 5.3 Ajtai’s hash
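
Algorithm 5.3 is essentially a single modular matrix-vector product. A minimal Python sketch with the parameter choice above (d = 2, q = n², m ≈ 2n log q; the names and the seed are illustrative):

```python
import numpy as np

n = 8
q = n * n                                # q = n^2
m = int(2 * n * np.log2(q))              # m ~ 2 n log q / log d, with d = 2

rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(n, m))      # the index of the hash family

def f_A(y):
    """Ajtai's hash f_A(y) = Ay mod q, for y in {0, 1}^m."""
    return A @ y % q

y = rng.integers(0, 2, size=m)           # m input bits ...
print(f_A(y))                            # ... hashed to n*log2(q) output bits
```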

This proposal is really simple and can be implemented efficiently; in practice, however, hash functions are designed in an ad hoc way, without the theoretical guarantees provided by a security proof, so that they are faster than Ajtai’s construction. Moreover, if an attacker has access to sufficiently many hash values, it becomes possible to recover the fundamental domain of \(\mathcal{L}_{q}^{{\ast}}(A)\), allowing collisions to be computed easily.

In 2011, Stehlé and Steinfeld [86] proposed a collision-resistant hash function family with better algorithms, whose construction will be important for digital signature schemes, as we will show in Sect. 5.4.

5.3 Lattice-Based Encryption

5.3.1 GGH

The GGH cryptosystem [42] makes it easy to understand the use of lattices in public-key cryptography. This cryptosystem uses the orthogonality of bases in the definition of the key pair. The private key is a basis \(B_{\mathrm{priv}}\) formed by nearly orthogonal vectors, namely, vectors with Hadamard ratio close to 1.

In general, the cryptosystem works as follows:
  • The encryption algorithm adds noise \(r \in \mathbb{R}^{n}\) to the plaintext \(m \in \mathcal{L}\), obtaining the ciphertext \(c = m + r\);

  • The decryption algorithm must be able to remove the inserted noise; equivalently, it must solve an instance of the CVP.

Figure 17 shows a two-dimensional lattice with a basis given by the nearly orthogonal vectors v1 and v2. Figure 18 shows a different basis for the same lattice, composed of vectors whose Hadamard ratio is close to zero.
Fig. 17 Good basis
Fig. 18 Bad basis

In high-dimensional lattices, the closer the Hadamard ratio of the basis is to zero, the harder the CVP becomes. Thus, we can define the public key as a basis Bpub such that \(\mathcal{H}(B_{\mathrm{pub}})\) is close to zero. Furthermore, knowing the private key Bpriv, it is possible to use Babai’s algorithm [3], described below, to recover the plaintext.

Algorithm 5.4 Babai’s algorithm

The general idea of Babai’s algorithm is to express the vector c in the private basis \(B_{\mathrm{priv}}\) by solving a linear system of n equations. As \(c \in \mathbb{R}^{n}\) need not be a point of the lattice \(\mathcal{L}\), each coefficient \(t_{i} \in \mathbb{R}\) must be approximated by the nearest integer ai, an operation denoted \(a_{i} \leftarrow \lfloor t_{i}\rceil \). This procedure is simple and works very well provided the basis Bpriv is sufficiently orthogonal, which keeps the rounding errors small.
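
A minimal Python sketch of this round-off procedure, with an illustrative, nearly orthogonal private basis (the rows of B_priv are the basis vectors):

```python
import numpy as np

def babai_round(B, c):
    """Babai's round-off: write c in the basis B (rows), round, map back."""
    t = np.linalg.solve(B.T, c)   # coefficients t with c = t @ B
    a = np.rint(t)                # a_i <- round(t_i), the nearest integers
    return a @ B                  # the nearby lattice point

B_priv = np.array([[7.0, 1.0], [-1.0, 8.0]])  # a "good" (nearly orthogonal) basis
m = np.array([3.0, -2.0]) @ B_priv            # a lattice point: the plaintext
c = m + np.array([0.3, -0.4])                 # ciphertext = plaintext + noise
print(babai_round(B_priv, c), m)              # the noise is stripped off
```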

One way to attack the cryptosystem is to try to reduce the basis Bpub in order to obtain shorter vectors with Hadamard ratio close to 1. In dimension two the problem can be easily solved using Gauss reduction (Algorithm 5.1). For higher dimensions the problem is considered hard, although 1982 saw a great advance with the invention of the LLL algorithm [50]. Thus, the cryptosystem parameters must be designed to resist LLL basis reduction.

5.3.2 NTRU

The NTRU cryptosystem [45] was originally constructed over polynomial rings but can also be defined over lattices, because the underlying problem can be interpreted as SVP and CVP instances. Hence, solving these problems would yield an attack on the cryptosystem if the parameters are not carefully chosen.

The cryptosystem uses the following polynomial rings: \(R = \mathbb{Z}[x]/(x^{N} - 1)\), \(R_{p} = (\mathbb{Z}/p\mathbb{Z})[x]/(x^{N} - 1)\) and \(R_{q} = (\mathbb{Z}/q\mathbb{Z})[x]/(x^{N} - 1)\), where N, p, q are positive integers.
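
Arithmetic in these rings reduces to cyclic convolution of coefficient vectors, since x^N ≡ 1. A naive O(N²) Python sketch of this product (the basic NTRU operation):

```python
import numpy as np

def convmul(a, b, N, q=None):
    """Product in Z[x]/(x^N - 1): cyclic convolution, reduced mod q if given."""
    c = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]   # the exponent wraps: x^N = 1
    return c % q if q is not None else c

# (1 + x)(1 + x^2) = 1 + x + x^2 + x^3 in Z[x]/(x^4 - 1)
print(convmul([1, 1, 0, 0], [1, 0, 1, 0], N=4))   # -> [1 1 1 1]
```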

Definition 18

Given positive integers d1 and d2, we define \(\mathcal{T} (d_{1},d_{2})\) as the class of polynomials having d1 coefficients equal to 1, d2 coefficients equal to − 1, and all remaining coefficients equal to zero. These polynomials are called ternary polynomials.

The parameters are given by (N, p, q, d), where N and p are prime, \((p,q) = (N,q) = 1\) (i.e., q is coprime to both p and N), and q > (6d + 1)p. The private key corresponds to the polynomials \(f(x) \in \mathcal{T} (d + 1,d)\) and \(g(x) \in \mathcal{T} (d,d)\). The public key is given by the polynomial \(h(x) \equiv F_{q}(x) \cdot g(x)\) in Rq, where Fq(x) is the multiplicative inverse of f(x) in Rq.

Given a message m(x) ∈ R with coefficients in the interval \([-p/2,p/2]\), a polynomial r(x) is chosen at random and the ciphertext is computed as \(e(x) \equiv ph(x) \cdot r(x) + m(x)\pmod q\).

To decrypt, we first compute \(a(x) \equiv f(x) \cdot e(x)\pmod q\), with coefficients taken in the interval \([-q/2,q/2]\). The message m(x) is then obtained by computing \(m(x) \equiv F_{p}(x) \cdot a(x)\pmod p\). In summary:
  • KeyGen. Choose \(f \in \mathcal{T} (d + 1,d)\) such that f is invertible in Rq and Rp, and choose \(g \in \mathcal{T} (d,d)\). Compute Fq as the inverse of f in Rq and, analogously, Fp as the inverse of f in Rp. The public key is given by \(h = F_{q} \cdot g\).

  • Encrypt. Given a plaintext m ∈ Rp, choose \(r \in \mathcal{T} (d,d)\) at random and compute \(e \equiv pr \cdot h + m\pmod q\), where h is the public key.

  • Decrypt. Compute \(a = \lfloor f \cdot e\rceil _{q} \equiv \lfloor pg \cdot r + f \cdot m\rceil _{q}\). Finally, the message is obtained by computing \(m \equiv F_{p} \cdot a\pmod p\).

5.3.3 LWE (Learning with Errors)-Based Encryption

In this section we present a cryptosystem based on the LWE problem, an efficient proposal with a security proof based on worst-case problems over lattices [80]. This proof was a quantum reduction: it shows that a vulnerability in the cryptosystem implies the existence of a quantum algorithm for solving hard lattice problems. In 2009, Peikert gave a classical reduction for the security proof [73].

Definition 19

The LWE problem consists in finding the vector \(s \in \mathbb{Z}_{q}^{n}\), given equations \(\langle s,a_{i}\rangle + e_{i} = b_{i}\pmod q\), for 1 ≤ i ≤ n. The values ei are small errors inserted according to a distribution \(\mathcal{D}\), generally taken to be Gaussian.

In 2010, Lyubashevsky, Peikert, and Regev used polynomial rings to define the ring-LWE (RLWE) scheme [52]. Let \(f(x) = x^{d} + 1\), where d is a power of 2. Given an integer q and an element \(s \in R_{q} = \mathbb{Z}_{q}[x]/f(x)\), the ring-LWE problem over Rq, with respect to the distribution \(\mathcal{D}\), consists in finding s satisfying equations \(s.a_{i} + e_{i} = b_{i}\pmod R_{q}\), for 1 ≤ i ≤ n, where ai and bi are elements of Rq, and reduction mod Rq means reducing by the polynomial f(x) and the coefficients mod q. An RLWE-based cryptosystem can be constructed as follows:
  • KeyGen. Choose randomly a ∈ Rq and generate s and e in R using distribution \(\mathcal{D}\). The private key is given by s, while the public key is given by \((a,b = a.s + e)\).

  • Encrypt. To encrypt d bits, interpret them as the coefficients of a polynomial z ∈ R. The encryption algorithm then chooses \(r,e_{1},e_{2} \in R\), using the same distribution \(\mathcal{D}\), and computes (u, v) in the following way:
    $$\displaystyle\begin{array}{rcl} u& =& a.r + e_{1}\pmod q\text{,} {}\\ v& =& b.r + e_{2} + \lfloor q/2\rfloor.z\pmod q. {}\\ \end{array}$$
  • Decrypt. To decrypt, the algorithm computes
    $$\displaystyle{v - u.s = (r.e - s.e_{1} + e_{2}) + \lfloor q/2\rfloor.z\pmod q.}$$

By the choice of parameters, \((r.e - s.e_{1} + e_{2})\) has magnitude at most q∕4 in each coefficient, so each plaintext bit can be recovered by inspecting the corresponding coefficient of the result: if it is closer to 0 than to q∕2, the bit is 0; otherwise it is 1.
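
The following Python toy walks through KeyGen, Encrypt, and Decrypt for illustrative parameters (d = 16, q = 257). As simplifying assumptions, “small” ring elements are sampled uniformly from {−1, 0, 1} as a stand-in for the distribution \(\mathcal{D}\), and the ring product is a naive negacyclic convolution:

```python
import numpy as np

d, q = 16, 257                    # toy ring R_q = Z_q[x]/(x^d + 1), d a power of 2
rng = np.random.default_rng(7)

def negmul(a, b):
    """Product in Z_q[x]/(x^d + 1): negacyclic convolution (x^d = -1)."""
    c = np.zeros(d, dtype=int)
    for i in range(d):
        for j in range(d):
            c[(i + j) % d] += a[i] * b[j] * (-1) ** ((i + j) // d)
    return c % q

def small():                      # stand-in for the error distribution D
    return rng.integers(-1, 2, size=d)

# KeyGen: sk = s, pk = (a, b = a.s + e)
a, s, e = rng.integers(0, q, size=d), small(), small()
b = (negmul(a, s) + e) % q

# Encrypt bits z: u = a.r + e1, v = b.r + e2 + floor(q/2).z
z = rng.integers(0, 2, size=d)
r, e1, e2 = small(), small(), small()
u = (negmul(a, r) + e1) % q
v = (negmul(b, r) + e2 + (q // 2) * z) % q

# Decrypt: v - u.s = (r.e - s.e1 + e2) + floor(q/2).z; round each coefficient
w = (v - negmul(u, s)) % q
bits = ((w > q // 4) & (w < 3 * q // 4)).astype(int)
assert (bits == z).all()          # the noise stays below q/4, so every bit survives
```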

Some concepts in this section, such as the cyclotomic polynomial ring and the Gaussian distribution \(\mathcal{D}\), were recently incorporated into the NTRU scheme, allowing the construction of a semantically secure and efficient scheme for lattice-based encryption [86], whose public and private keys and whose encryption and decryption algorithms have complexity \(\tilde{O}(\lambda )\), where λ is the security parameter.

5.3.4 Homomorphic Encryption

In 2009, Gentry proposed the construction of a fully homomorphic encryption scheme [37], solving a problem open since 1978, when Rivest, Adleman, and Dertouzos conjectured the existence of privacy homomorphisms [62], i.e., encryption functions that are also algebraic homomorphisms. In other words, it is possible to add and multiply ciphertexts so that, upon decryption, we obtain the result of the corresponding operations executed on the underlying plaintexts.

If the plaintext space is given by { 0, 1 }, then bit addition is equivalent to logic exclusive disjunction, while multiplication is equivalent to logic conjunction. Hence, it is possible to compute any Boolean circuit over encrypted data, which implies that we can evaluate any algorithm homomorphically with encrypted arguments, obtaining an encrypted output.

Using homomorphic encryption it is possible to delegate algorithm computation to a server, maintaining input confidentiality. This is interesting for cloud computing, because it allows the construction of applications such as encrypted databases, encrypted disks, encrypted search engines, etc.

The computational cost of homomorphic encryption is, nevertheless, still a hindrance to its practical use. Recently, Brakerski proposed using the LWE problem to construct fully homomorphic encryption [19], reducing the algorithms’ complexity and achieving polylogarithmic overhead per operation. Brakerski introduced a new way of managing noise growth, which allows executing a greater number of multiplications. In particular, he proposed a modulus reduction technique that implicitly lowers the noise growth rate. An algorithm called dimension reduction makes it possible to replace the bootstrapping procedure by a new (and in many aspects similar) method, leading to better parameters. Nevertheless, even with recent optimizations, homomorphic encryption remains impractical.

5.4 Digital Signatures

The GGH and NTRU cryptosystems can be converted into digital signature schemes [12]. However, such proposals lack a security proof and, in fact, there are attacks that recover the private key given a sufficiently large number of signatures [63].

In 2007, Gentry, Peikert, and Vaikuntanathan [39] created a new kind of trapdoor function f with an extra property: an efficient algorithm that, using the trapdoor, samples elements from the preimage of f. A composition of Gaussians is used to obtain a point close to a lattice vector. This distribution has standard deviation greater than the maximum norm of the basis vectors, so that reduction modulo the fundamental parallelepiped yields a distribution indistinguishable from uniform. Furthermore, the construction does not reveal the lattice geometry, because the normal distribution is spherical. Given a message m and a hash function H that maps plaintexts to the preimage of f, we compute the point y = H(m). The signature is \(\delta = f^{-1}(y)\); to verify it, we check that f(δ) = H(m). This kind of construction was proposed by Bellare and Rogaway [7], using trapdoor permutations and modeling H as a random oracle. The result is a digital signature scheme with existential unforgeability under adaptive chosen-message attacks. We can use a Gaussian to generate the noise e such that f(e) = y and \(y = v + e\), for a point v chosen uniformly in the lattice. Thus, the construction has a security proof based on worst-case lattice problems.

This construction can be viewed in terms of two functions: \(f_{A}(x) = Ax\pmod q\), Ajtai’s construction, and \(g_{A}(s,e) = A^{T}s + e\), the LWE function, where the former is surjective and the latter injective. In 2012, Micciancio and Peikert [58] showed a simple, secure, and efficient way to invert gA and to sample from the preimage of fA, allowing the construction of an efficient digital signature scheme. In this proposal the Gaussian composition allows parallelism (in the earlier work [39] and subsequent proposals [86] it was inherently sequential), leading to a concrete improvement. These optimizations apply to any application based on the function gA or on preimage sampling for fA; hence, they matter not only for digital signatures, but also for constructing encryption secure against adaptive chosen-ciphertext attacks.

5.5 Other Applications

Lattice-based cryptography is interesting not only because it resists quantum attacks but also because it is a flexible tool for the construction of cryptosystems. In particular, the ring-LWE problem has become more and more important, as it allows the construction of stronger trapdoor functions, with better parameters for both security and performance [58].

Gentry [38] analyzed how flexible a cryptosystem can be, considering not just fully homomorphic encryption but also access control. Thus, lattice-based cryptography seems to be, according to Gentry, a feasible avenue for exploring the limits of what is possible with cryptography. Among other applications, we emphasize the following:
  • multilinear maps. Bilinear pairings can be used in different contexts, for example in identity-based encryption. The generalization of this concept, called multilinear maps, is very useful, and although no construction appeared for a while, many applications were suggested. Using the noise concept also present in homomorphic encryption, Garg, Gentry, and Halevi achieved the construction of multilinear maps [34];

  • identity-based encryption. For some time, identity-based encryption was only achievable using bilinear pairings. Using lattices, many proposals were put forward [17, 39], built upon the dual scheme \(\mathcal{E}\), composed of the algorithms \(\{\mathop{DualKeyGen}\nolimits,\mathop{DualEnc}\nolimits,\mathop{DualDec}\nolimits \}\), as pointed out in Sect. 5.3.3. Specifically, \(\mathop{DualKeyGen}\nolimits\) computes the private key as the error e, chosen from the Gaussian distribution, while the public key is given by u = fA(e). To encrypt a bit b, the algorithm \(\mathop{DualEnc}\nolimits\) randomly chooses s, chooses x and e′ according to the Gaussian, and computes \(c_{1} = g_{A}(s,x)\) and \(c_{2} = u^{T}s + e^{{\prime}} + b \cdot \lfloor q/2\rfloor \). The ciphertext is \(\langle c_{1},c_{2}\rangle\). Finally, \(\mathop{DualDec}\nolimits\) computes \(b = c_{2} - e^{T}c_{1}\). Then, given a hash function H, modeled as a random oracle mapping identities to public keys of the dual cryptosystem, the identity-based encryption scheme is constructed as follows:
    • Setup. Choose the public key \(A \in \mathbb{Z}_{q}^{n\times m}\) and the master key as the trapdoor s, according to the description in Sect. 5.4;

    • Extraction. Given an identity \(\mathrm{id}\), compute u = H(id) and the decryption key \(e = f^{-1}(u)\), using the preimage sampling algorithm with trapdoor s;

    • Encrypt. Given bit b, return \(\langle c_{1},c_{2}\rangle =\mathop{ DualEnc}\nolimits (u,b)\);

    • Decrypt. Return \(\mathop{DualDec}\nolimits (e,\langle c_{1},c_{2}\rangle )\).

  • functional encryption. Functional encryption is a new cryptographic primitive that opens new horizons [51]. In this system, a public function f(x, y) determines what a user holding the key y can learn from a ciphertext cx, encrypted under parameter x. Within this model, whoever encrypts a message m can choose in advance what kind of information is obtained after decryption. Moreover, a trusted party is responsible for generating the key sy, which can be used to decrypt cx, returning f(x, y) as output, without necessarily revealing further information about m. With this approach it is possible to define identity-based encryption as a special case of functional encryption, with x = (m, id) and f(x, y) = m if and only if y = id. A recent result [35] proposes a functional encryption scheme based on lattices that can handle any polynomial-size Boolean circuit;

  • attribute-based encryption. This is a special case of functional encryption in which x = (m, ϕ) and f(x, y) = m if and only if ϕ(y) = 1. That is, decryption works when y, the decryptor’s attribute, satisfies the predicate ϕ, so the encryptor can set an access-control policy (the predicate ϕ) for the cryptosystem. There are proposals achieving this kind of operation based on the LWE problem [82], and the multilinear-map construction mentioned above has been used by Sahai and Waters [40] to propose an attribute-based scheme for any Boolean circuit, showing once more the versatility of lattice-based cryptography;

  • obfuscation. There is a negative result proving that obfuscation is impossible in a certain security model. However, the construction of indistinguishability obfuscation was recently proposed; in a different security model, it is provably the best possible notion. The LWE problem was used to construct this kind of primitive [35] as part of a functional encryption construction. Such schemes, therefore, although versatile, are relevant mostly for their theoretical importance rather than their practical applications.

Concluding Remarks

As we have seen, not all is lost for the deployment of efficient and flexible cryptosystems in a scenario where large quantum computers are a technological reality. Many proposals have already attained a fairly good level of maturity, and one can even discern some patterns in schemes based on different underlying security assumptions, in the sense of there existing strikingly similar schemes based on codes, lattices, \(\mathcal{M}\mathcal{Q}\) systems, and sometimes even hash functions. Determining how far the analogies can go (and why) is an interesting line for future investigation.

At the same time, practical considerations are ever more often being addressed in the literature, as they are as important as theoretical ones in a truly post-quantum scenario where conventional systems would have to be replaced.

The fact that post-quantum schemes can also provide functionalities not available elsewhere has already been, and is likely to continue to be, a strong additional motivation for further research in the area.

Acknowledgements

Paulo S. L. M. Barreto, Ricardo Dahab and Julio López acknowledge support by the Brazilian National Council for Scientific and Technological Development (CNPq) research productivity grants 306935/2012-0, 311530/2011-7, and 309258/2011-1, respectively.

References

  1. 1.
    M. Ajtai, Generating hard instances of lattice problems (extended abstract), in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC ‘96 (ACM, New York, 1996), pp. 99–108Google Scholar
  2. 2.
    M. Alabbadi, S.B. Wicker, A digital signature scheme based on linear error-correcting block codes, in Advances in Cryptology – Asiacrypt ‘94, vol. 917 of Lecture Notes in Computer Science (Springer, New York, 1994), pp. 238–348Google Scholar
  3. 3.
    L Babai, On lovsz lattice reduction and the nearest lattice point problem. Combinatorica 6(1), 1–13 (1986)CrossRefMATHMathSciNetGoogle Scholar
  4. 4.
    M. Baldi, F. Chiaraluce, Cryptanalysis of a new instance of McEliece cryptosystem based on QC-LDPC code, in IEEE International Symposium on Information Theory – ISIT 2007 (IEEE, Nice, 2007), pp. 2591–2595Google Scholar
  5. 5.
    M. Baldi, F. Chiaraluce, M. Bodrato, A new analysis of the McEliece cryptosystem based on QC-LDPC codes, in Security and Cryptography for Networks – SCN 2008, vol. 5229 of Lecture Notes in Computer Science (Springer, Amalfi, 2008), pp. 246–262Google Scholar
  6. 6.
    R. Barbulescu, P. Gaudry, A. Joux, E. Thomé, A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. HAL-INRIA technical report, http://hal.inria.fr/hal-00835446/ (2013)
  7. 7.
    M. Bellare, P. Rogaway, Random oracles are practical: A paradigm for designing efficient protocols, in Proceedings of the 1st ACM conference on Computer and communications security (ACM, 1993), pp. 62–73Google Scholar
  8. 8.
    T.P. Berger, P.-L. Cayrel, P. Gaborit, A. Otmani, Reducing key length of the McEliece cryptosystem, in Progress in Cryptology – Africacrypt 2009, Lecture Notes in Computer Science (Springer, Gammarth, 2009), pp. 77–97Google Scholar
  9. 9.
    E. Berlekamp, R. McEliece, H. van Tilborg, On the inherent intractability of certain coding problems. IEEE Trans. Inf. Theory 24(3), 384–386 (1978)CrossRefMATHGoogle Scholar
  10. 10.
    D. Bernstein, T. Lange, C. Peters, Smaller decoding exponents: ball-collision decoding, in Advances in Cryptology – Crypto 2011, vol. 6841 of Lecture Notes in Computer Science (Springer, Santa Barbara, 2011), pp. 743–760Google Scholar
  11. 11.
    D.J. Bernstein, List decoding for binary Goppa codes, in Coding and Cryptology—Third International Workshop, IWCC 2011, Lecture Notes in Computer Science (Springer, Qingdao, 2011), pp. 62–80Google Scholar
  12. 12.
    D.J. Bernstein, J. Buchmann, E. Dahmen, Post-Quantum Cryptography (Springer, Heidelberg, 2008)Google Scholar
  13. 13.
    D.J. Bernstein, T. Lange, C. Peters, Attacking and defending the McEliece cryptosystem, in Post-Quantum Cryptography – PQCrypto 2008, vol. 5299 of Lecture Notes in Computer Science (Springer, New York, 2008), pp. 31–46. http://www.springerlink.com/content/68v69185x478p53g
  14. 14.
    D.J. Bernstein, T. Lange, C. Peters, Wild McEliece, in Selected Areas in Cryptography – SAC 2010, vol. 6544 of Lecture Notes in Computer Science (Springer, Waterloo, 2010), pp. 143–158Google Scholar
  15. 15.
    G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Keccak specifications. Submission to NIST (2010). http://keccak.noekeon.org/Keccak-specifications.pdf
  16. 16.
    G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Sponge functions. ECRYPT Hash Workshop 2007 (2007). Also available as public comment to NIST from http://www.csrc.nist.gov/pki/HashWorkshop/Public_Comments/2007_May.html
  17. 17.
    D. Boneh, C. Gentry, M. Hamburg, Space-efficient identity based encryption without pairings, in FOCS, pp. 647–657 (2007)Google Scholar
  18. 18.
    A. Braeken, C. Wolf, B. Preneel, A study of the security of unbalanced oil and vinegar signature schemes, in Topics in Cryptology – CT-RSA 2005, vol. 3376 of Lecture Notes in Computer Science (Springer, New York, 2005), pp. 29–43Google Scholar
  19. Z. Brakerski, V. Vaikuntanathan, Efficient fully homomorphic encryption from (standard) LWE. Electron. Colloq. Comput. Complex. 18, 109 (2011)
  20. J. Buchmann, C. Coronado, E. Dahmen, M. Döring, E. Klintsevich, CMSS – an improved Merkle signature scheme, in Progress in Cryptology – INDOCRYPT 2006, vol. 4329 of Lecture Notes in Computer Science (Springer, New York, 2006), pp. 349–363
  21. J. Buchmann, E. Dahmen, S. Ereth, A. Hülsing, M. Rückert, On the security of the Winternitz one-time signature scheme, in Progress in Cryptology – AFRICACRYPT 2011, vol. 6737 of Lecture Notes in Computer Science (Springer, New York, 2011), pp. 363–378
  22. J. Buchmann, E. Dahmen, A. Hülsing, XMSS – a practical forward secure signature scheme based on minimal security assumptions. Cryptology ePrint Archive, Report 2011/484 (2011)
  23. J. Buchmann, E. Dahmen, E. Klintsevich, K. Okeya, C. Vuillaume, Merkle signatures with virtually unlimited signature capacity, in Applied Cryptography and Network Security – ACNS 2007, vol. 4521 of Lecture Notes in Computer Science (Springer, New York, 2007), pp. 31–45
  24. J. Buchmann, E. Dahmen, M. Schneider, Merkle tree traversal revisited, in Post-Quantum Cryptography – PQCrypto 2008, vol. 5299 of Lecture Notes in Computer Science (Springer, New York, 2008), pp. 63–78
  25. S. Contini, A.K. Lenstra, R. Steinfeld, VSH, an efficient and provable collision resistant hash function. Cryptology ePrint Archive, Report 2005/193 (2005). http://eprint.iacr.org/
  26. N. Courtois, M. Finiasz, N. Sendrier, How to achieve a McEliece-based digital signature scheme, in Advances in Cryptology – Asiacrypt 2001, vol. 2248 of Lecture Notes in Computer Science (Springer, Gold Coast, 2001), pp. 157–174
  27. R.A. DeMillo, D.P. Dobkin, A.K. Jones, R.J. Lipton, Foundations of Secure Computation (Academic Press, New York, 1978)
  28. J. Ding, D. Schmidt, Rainbow, a new multivariable polynomial signature scheme, in International Conference on Applied Cryptography and Network Security – ACNS 2005, vol. 3531 of Lecture Notes in Computer Science (Springer, New York, 2005), pp. 164–175
  29. C. Dods, N. Smart, M. Stam, Hash based digital signature schemes, in Cryptography and Coding, vol. 3796 of Lecture Notes in Computer Science (Springer, New York, 2005), pp. 96–115
  30. J.-C. Faugère, A. Otmani, L. Perret, J.-P. Tillich, Algebraic cryptanalysis of McEliece variants with compact keys, in Advances in Cryptology – Eurocrypt 2010, vol. 6110 of Lecture Notes in Computer Science (Springer, Nice, 2010), pp. 279–298
  31. P. Gaborit, Shorter keys for code based cryptography, in International Workshop on Coding and Cryptography – WCC 2005 (ACM Press, Bergen, 2005), pp. 81–91
  32. R.G. Gallager, Low-density parity-check codes. IRE Trans. Inf. Theory 8(1), 21–28 (1962)
  33. M.R. Garey, D.S. Johnson, Computers and Intractability – A Guide to the Theory of NP-Completeness (W. H. Freeman and Company, New York, 1979)
  34. S. Garg, C. Gentry, S. Halevi, Candidate multilinear maps from ideal lattices, in Advances in Cryptology – EUROCRYPT 2013, pp. 1–17 (2013)
  35. S. Garg, C. Gentry, S. Halevi, M. Raykova, A. Sahai, B. Waters, Candidate indistinguishability obfuscation and functional encryption for all circuits. IACR Cryptology ePrint Archive 2013, 451 (2013)
  36. V. Gauthier, G. Leander, Practical key recovery attacks on two McEliece variants, in International Conference on Symbolic Computation and Cryptography – SCC 2010 (Springer, Egham, 2010)
  37. C. Gentry, A fully homomorphic encryption scheme. PhD thesis, Stanford University, 2009. crypto.stanford.edu/craig
  38. C. Gentry, Encrypted messages from the heights of cryptomania, in TCC, pp. 120–121 (2013)
  39. C. Gentry, C. Peikert, V. Vaikuntanathan, Trapdoors for hard lattices and new cryptographic constructions, in Proceedings of the 40th Annual ACM Symposium on Theory of Computing, STOC '08 (ACM, New York, 2008), pp. 197–206
  40. C. Gentry, A. Sahai, B. Waters, Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based, in Advances in Cryptology – CRYPTO 2013, vol. 8042 of Lecture Notes in Computer Science (Springer, New York, 2013), pp. 75–92
  41. J.K. Gibson, The security of the Gabidulin public key cryptosystem, in Advances in Cryptology – Eurocrypt '96, vol. 1070 of Lecture Notes in Computer Science (Springer, Zaragoza, 1996), pp. 212–223
  42. O. Goldreich, S. Goldwasser, S. Halevi, Public-key cryptosystems from lattice reduction problems, in Advances in Cryptology – CRYPTO '97, vol. 1294 of Lecture Notes in Computer Science (Springer, New York, 1997), pp. 112–131
  43. V.D. Goppa, A new class of linear error correcting codes. Problemy Peredachi Informatsii 6, 24–30 (1970)
  44. A. Hülsing, Practical forward secure signatures using minimal security assumptions. PhD thesis, TU Darmstadt, 2013
  45. J. Hoffstein, J. Pipher, J.H. Silverman, NTRU: a ring-based public key cryptosystem, in Algorithmic Number Theory – ANTS-III, vol. 1423 of Lecture Notes in Computer Science (Springer, New York, 1998), pp. 267–288
  46. W.C. Huffman, V. Pless, Fundamentals of Error-Correcting Codes (Cambridge University Press, Cambridge, 2003)
  47. A. Kipnis, A. Shamir, Cryptanalysis of the oil and vinegar signature scheme, in Advances in Cryptology – Crypto 1998, ed. by H. Krawczyk, vol. 1462 of Lecture Notes in Computer Science (Springer, New York, 1998), pp. 257–266
  48. A. Kipnis, J. Patarin, L. Goubin, Unbalanced oil and vinegar signature schemes, in Advances in Cryptology – EUROCRYPT '99, ed. by J. Stern, vol. 1592 of Lecture Notes in Computer Science (Springer, New York, 1999), pp. 206–222
  49. L. Lamport, Constructing digital signatures from a one way function. Technical Report CSL-98, SRI International (1979)
  50. A.K. Lenstra, H.W. Lenstra, L. Lovász, Factoring polynomials with rational coefficients. Math. Ann. 261(4), 515–534 (1982)
  51. A. Lewko, T. Okamoto, A. Sahai, K. Takashima, B. Waters, Fully secure functional encryption: attribute-based encryption and (hierarchical) inner product encryption, in Advances in Cryptology – EUROCRYPT 2010, ed. by H. Gilbert, vol. 6110 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2010), pp. 62–91
  52. V. Lyubashevsky, C. Peikert, O. Regev, On ideal lattices and learning with errors over rings, in Advances in Cryptology – EUROCRYPT 2010, vol. 6110 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2010), pp. 1–23
  53. F.J. MacWilliams, N.J.A. Sloane, The Theory of Error-Correcting Codes, vol. 16 (North-Holland Mathematical Library, Amsterdam, 1977)
  54. S.M. Matyas, C.H. Meyer, J. Oseas, Generating strong one-way functions with cryptographic algorithm. IBM Technical Disclosure Bulletin (1985)
  55. R. McEliece, A public-key cryptosystem based on algebraic coding theory. The Deep Space Network Progress Report, DSN PR 42–44, 1978. http://ipnpr.jpl.nasa.gov/progressreport2/42-44/44N.PDF
  56. R.C. Merkle, Secrecy, Authentication, and Public Key Systems. PhD thesis, Stanford University, 1979
  57. R.C. Merkle, A digital signature based on a conventional encryption function, in Advances in Cryptology – CRYPTO '87, vol. 293 of Lecture Notes in Computer Science (Springer, New York, 1987), pp. 369–378
  58. D. Micciancio, C. Peikert, Trapdoors for lattices: simpler, tighter, faster, smaller, in Advances in Cryptology – EUROCRYPT 2012, ed. by D. Pointcheval, T. Johansson, vol. 7237 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2012), pp. 700–718
  59. V.S. Miller, Use of elliptic curves in cryptography, in Advances in Cryptology – Crypto '85 (Springer, New York, 1986), pp. 417–426
  60. R. Misoczki, N. Sendrier, J.-P. Tillich, P.S.L.M. Barreto, MDPC-McEliece: new McEliece variants from moderate density parity-check codes. Cryptology ePrint Archive, Report 2012/409 (2012). http://eprint.iacr.org/2012/409
  61. C. Monico, J. Rosenthal, A. Shokrollahi, Using low density parity check codes in the McEliece cryptosystem, in IEEE International Symposium on Information Theory – ISIT 2000 (IEEE, Sorrento, 2000), p. 215
  62. E.M. Morais, R. Dahab, Encriptação homomórfica, in XII Simpósio Brasileiro em Segurança da Informação e de Sistemas Computacionais: Minicursos, SBSeg (2012)
  63. P. Nguyen, O. Regev, Learning a parallelepiped: cryptanalysis of GGH and NTRU signatures, in Advances in Cryptology – EUROCRYPT 2006, ed. by S. Vaudenay, vol. 4004 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2006), pp. 271–288
  64. H. Niederreiter, Knapsack-type cryptosystems and algebraic coding theory. Prob. Control Inf. Theory 15(2), 159–166 (1986)
  65. NIST, Federal Information Processing Standard FIPS 186-3 – Digital Signature Standard (DSS) – 6. The Elliptic Curve Digital Signature Algorithm (ECDSA) (National Institute of Standards and Technology (NIST), Gaithersburg, 2012). http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf
  66. A.K.D.S. Oliveira, J. López, Implementação em software do Esquema de Assinatura Digital de Merkle e suas variantes, in Brazilian Symposium on Information and Computer Systems Security – SBSeg 2013 (SBC, 2013)
  67. A. Otmani, J.-P. Tillich, L. Dallot, Cryptanalysis of two McEliece cryptosystems based on quasi-cyclic codes. Math. Comput. Sci. 3(2), 129–140 (2010)
  68. J. Patarin, The oil and vinegar signature scheme, in Dagstuhl Workshop on Cryptography (1997). Transparencies
  69. J. Patarin, L. Goubin, Trapdoor one-way permutations and multivariate polynomials, in ICICS '97, vol. 1334 of Lecture Notes in Computer Science (Springer, New York, 1997), pp. 356–368
  70. J. Patarin, Hidden fields equations (HFE) and isomorphisms of polynomials (IP): two new families of asymmetric algorithms, in Advances in Cryptology – EUROCRYPT '96, ed. by U. Maurer, vol. 1070 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 1996), pp. 33–48
  71. J. Patarin, L. Goubin, N. Courtois, Improved algorithms for isomorphisms of polynomials, in Advances in Cryptology – EUROCRYPT '98 (Springer, New York, 1998), pp. 184–200
  72. N.J. Patterson, The algebraic decoding of Goppa codes. IEEE Trans. Inf. Theory 21(2), 203–207 (1975)
  73. C. Peikert, Public-key cryptosystems from the worst-case shortest vector problem: extended abstract, in Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC '09 (ACM, New York, 2009), pp. 333–342
  74. A. Petzoldt, S. Bulygin, J. Buchmann, CyclicRainbow – a multivariate signature scheme with a partially cyclic public key, in Progress in Cryptology – Indocrypt 2010, ed. by G. Gong, K. Gupta, vol. 6498 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2010), pp. 33–48
  75. A. Petzoldt, S. Bulygin, J. Buchmann, Selecting parameters for the Rainbow signature scheme, in Post-Quantum Cryptography – PQCrypto 2010, ed. by N. Sendrier, vol. 6061 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2010), pp. 218–240. Extended version: http://eprint.iacr.org/2010/437
  76. A. Petzoldt, S. Bulygin, J. Buchmann, Linear recurring sequences for the UOV key generation, in International Conference on Practice and Theory in Public Key Cryptography – PKC 2011, vol. 6571 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2011), pp. 335–350
  77. A. Petzoldt, S. Bulygin, J. Buchmann, CyclicRainbow – a multivariate signature scheme with a partially cyclic public key, in INDOCRYPT 2010, ed. by G. Gong, K.C. Gupta, vol. 6498 of Lecture Notes in Computer Science (Springer, New York, 2010), pp. 33–48
  78. B. Preneel, Analysis and design of cryptographic hash functions. PhD thesis, Katholieke Universiteit Leuven, 1993
  79. L. Rausch, A. Hülsing, J. Buchmann, Optimal parameters for \(\mathrm{XMSS}^{\mathrm{MT}}\), in CD-ARES 2013, vol. 8128 of Lecture Notes in Computer Science (Springer, New York, 2013), pp. 194–208
  80. O. Regev, The learning with errors problem (invited survey), in IEEE Conference on Computational Complexity (IEEE Computer Society, Washington, DC, 2010), pp. 191–204
  81. R.L. Rivest, A. Shamir, L. Adleman, A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 21, 120–126 (1978)
  82. A. Sahai, B. Waters, Attribute-based encryption for circuits from multilinear maps. CoRR, abs/1210.5287 (2012)
  83. N. Sendrier, Decoding one out of many, in Post-Quantum Cryptography – PQCrypto 2011, ed. by B.-Y. Yang, vol. 7071 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2011), pp. 51–67. doi:10.1007/978-3-642-25405-5_4
  84. P.W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26, 1484–1509 (1997)
  85. A. Shoufan, N. Huber, H. Molter, A novel cryptoprocessor architecture for chained Merkle signature scheme, in Microprocessors and Microsystems (Elsevier, Amsterdam, 2011), pp. 34–47
  86. D. Stehlé, R. Steinfeld, Making NTRU as secure as worst-case problems over ideal lattices, in Proceedings of the 30th Annual International Conference on Theory and Applications of Cryptographic Techniques: Advances in Cryptology, EUROCRYPT '11 (Springer, Berlin/Heidelberg, 2011), pp. 27–47
  87. J. Stern, A method for finding codewords of small weight, in Coding Theory and Applications, vol. 388 of Lecture Notes in Computer Science (Springer, New York, 1989), pp. 106–113
  88. J. Stern, Can one design a signature scheme based on error-correcting codes? in Advances in Cryptology – ASIACRYPT '94, vol. 917 of Lecture Notes in Computer Science (Springer, New York, 1994), pp. 426–428
  89. M. Szydlo, Merkle tree traversal in log space and time, in Advances in Cryptology – Eurocrypt 2004, vol. 3027 of Lecture Notes in Computer Science (Springer, New York, 2004), pp. 541–554
  90. R.M. Tanner, Spectral graphs for quasi-cyclic LDPC codes, in IEEE International Symposium on Information Theory – ISIT 2001 (IEEE, Washington, DC, 2001), p. 226
  91. E. Thomae, A generalization of the Rainbow band separation attack and its applications to multivariate schemes. Cryptology ePrint Archive, Report 2012/223 (2012). http://eprint.iacr.org/2012/223
  92. C. Wieschebrink, Two NP-complete problems in coding theory with an application in code based cryptography, in IEEE International Symposium on Information Theory – ISIT 2006 (IEEE, Seattle, 2006), pp. 1733–1737
  93. R.S. Winternitz, Producing a one-way hash function from DES, in Advances in Cryptology – CRYPTO '83 (Springer, New York, 1983), pp. 203–207
  94. C. Wolf, B. Preneel, Taxonomy of public key schemes based on the problem of multivariate quadratic equations. IACR Cryptology ePrint Archive 2005, 77 (2005)
  95. T. Yasuda, K. Sakurai, T. Takagi, Reducing the key size of Rainbow using non-commutative rings, in Topics in Cryptology – CT-RSA 2012, vol. 7178 of Lecture Notes in Computer Science (Springer, New York, 2012), pp. 68–83

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Paulo S. L. M. Barreto (1)
  • Felipe Piazza Biasi (1)
  • Ricardo Dahab (2)
  • Julio César López-Hernández (2)
  • Eduardo M. de Morais (2)
  • Ana D. Salina de Oliveira (2)
  • Geovandro C. C. F. Pereira (1)
  • Jefferson E. Ricardini (1)

  1. Escola Politécnica, University of São Paulo, São Paulo (SP), Brazil
  2. Instituto de Computação, University of Campinas, Campinas (SP), Brazil
