
Analysis of Information Set Decoding for a Sub-linear Error Weight

  • Conference paper

Post-Quantum Cryptography (PQCrypto 2016)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 9606)

Abstract

The security of code-based cryptography is strongly related to the hardness of generic decoding of linear codes. The best known generic decoding algorithms all derive from the Information Set Decoding (ISD) algorithm proposed by Prange in 1962. The ISD algorithm was later improved by Stern in 1989 (and Dumer in 1991). In the last few years, some significant improvements have occurred: first by May, Meurer, and Thomae at Asiacrypt 2011, then by Becker, Joux, May, and Meurer at Eurocrypt 2012, and finally by May and Ozerov at Eurocrypt 2015. With those methods, correcting w errors in a binary linear code of length n and dimension k has a cost \(2^{cw(1+\mathbf {o}(1))}\) when the length n grows, where c is a constant depending on the code rate k / n and on the error rate w / n. Each of the above ISD variants improved that constant c when it appeared.

When the number of errors w is sub-linear, \(w=\mathbf {o}(n)\), the cost of all ISD variants still has the form \(2^{cw(1+\mathbf {o}(1))}\). We prove here that the constant c only depends on the code rate k / n and is the same for all the known ISD variants mentioned above, including the fifty-year-old Prange algorithm. The most promising variants of the McEliece encryption scheme use either Goppa codes, with \(w=\mathbf {O}(n/\log (n))\), or MDPC codes, with \(w=\mathbf {O}(\sqrt{n})\). Our result means that, in those cases, when we scale up the system parameters, the improvements of the latest variants of ISD become less and less significant. This fact has already been observed; we give here a formal proof of it. Moreover, our proof seems to indicate that any foreseeable variant of ISD should have the same asymptotic behavior.
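
For a quick numerical illustration of this claim, the following minimal sketch (with illustrative parameters only: rate \(R=1/2\) and \(w=\lfloor \sqrt{n}\rfloor \), an MDPC-like scaling, not taken from any concrete system) compares the per-error exponent of plain Prange decoding, \(\frac{1}{w}\log _2\big (\binom{n}{w}/\binom{n-k}{w}\big )\), the ratio that also appears as \(B_a(0,0)\) in the appendix, with the rate-only constant \(-\log _2(1-R)\).

from math import comb, log2, isqrt

R = 0.5
for n in (1_000, 10_000, 100_000):
    k = int(R * n)
    w = isqrt(n)  # sub-linear error weight, w ~ sqrt(n)
    c = (log2(comb(n, w)) - log2(comb(n - k, w))) / w
    print(n, round(c, 4), "target:", round(-log2(1 - R), 4))

The computed exponent tends to 1, the value of \(-\log _2(1-R)\) at rate 1/2, so the margin left for the more elaborate ISD variants shrinks as the parameters grow.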


Notes

  1.

    The results have been obtained independently; Dumer's variant is slightly better than Stern's, though by a very small amount.

  2.

    \(h^{-1}(1-R)\) is the asymptotic Gilbert-Varshamov bound, where \(h(x)=-x\log _2x-(1-x)\log _2(1-x)\) is the binary entropy function.
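
    For instance, at rate \(R=1/2\) this gives

    $$\begin{aligned} h^{-1}(1-R)=h^{-1}(0.5)\approx 0.110, \end{aligned}$$

    since \(h(0.110)\approx 0.50\).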

References

  1. McEliece, R.: A public-key cryptosystem based on algebraic coding theory. In: DSN Progress Report, pp. 114–116. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, January 1978


  2. Misoczki, R., Tillich, J.P., Sendrier, N., Barreto, P.S.L.M.: MDPC-McEliece: new McEliece variants from moderate density parity-check codes. In: IEEE International Symposium on Information Theory (ISIT 2013), Istanbul, Turkey, pp. 2069–2073, July 2013


  3. May, A., Ozerov, I.: On computing nearest neighbors with applications to decoding of binary linear codes. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9056, pp. 203–228. Springer, Heidelberg (2015)


  4. Prange, E.: The use of information sets in decoding cyclic codes. IRE Trans. IT–8, S5–S9 (1962)


  5. Berlekamp, E., McEliece, R., van Tilborg, H.: On the inherent intractability of certain coding problems. IEEE Trans. Inform. Theory 24(3), 384–386 (1978)


  6. Alekhnovich, M.: More on average case vs approximation complexity. In: FOCS 2003, pp. 298–307. IEEE (2003)


  7. Lee, P.J., Brickell, E.F.: An observation on the security of McEliece’s public-key cryptosystem. In: Günther, C.G. (ed.) EUROCRYPT 1988. LNCS, vol. 330, pp. 275–280. Springer, Heidelberg (1988)


  8. Stern, J.: A method for finding codewords of small weight. In: Cohen, G., Wolfmann, J. (eds.) Coding Theory and Applications. LNCS, vol. 388, pp. 106–113. Springer, Heidelberg (1989)


  9. Dumer, I.: On minimum distance decoding of linear codes. In: Proceedings of 5th Joint Soviet-Swedish International Workshop on Information Theory, Moscow, pp. 50–52 (1991)


  10. Canteaut, A., Chabaud, F.: A new algorithm for finding minimum-weight words in a linear code: application to McEliece’s cryptosystem and to narrow-sense BCH codes of length 511. IEEE Trans. Inf. Theory 44(1), 367–378 (1998)


  11. Finiasz, M., Sendrier, N.: Security bounds for the design of code-based cryptosystems. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 88–105. Springer, Heidelberg (2009)


  12. Bernstein, D.J., Lange, T., Peters, C.: Smaller decoding exponents: ball-collision decoding. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 743–760. Springer, Heidelberg (2011)


  13. May, A., Meurer, A., Thomae, E.: Decoding random linear codes in \(\tilde{\cal O}(2^{0.054n})\). In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 107–124. Springer, Heidelberg (2011)


  14. Becker, A., Joux, A., May, A., Meurer, A.: Decoding random binary linear codes in 2\(^{n/20}\): how 1 + 1 = 0 improves information set decoding. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 520–536. Springer, Heidelberg (2012)


  15. Alekhnovich, M.: More on average case vs approximation complexity. Comput. Complex. 20(4), 755–786 (2011)



Author information

Correspondence to Rodolfo Canto Torres.


Appendices

A Proof of Proposition 1

Proof

(of Proposition 1). We consider the execution of generic_isd (Fig. 2) and use the corresponding notation. If the input \((H_0,s_0,w)\) verifies Assumption 1, then so does \((H,s,w)\) inside any particular execution of the main loop.

  1.

    From the assumption, as long as we wish to estimate the cost up to a constant factor, we may assume that there is a unique solution to our problem. In one particular loop, we can only find an error pattern \((e'',e')\) such that its first \(n-k-\ell \) bits have weight \(w-p\) and its last \(k+\ell \) bits have weight p. This happens with probability at most \(P=\frac{\binom{n-k-\ell }{w-p}\binom{k+\ell }{p}}{\binom{n}{w}}\). Thus we expect to execute the main loop, and thus instruction “1:”, at least \(1/P\) times.

  2.

    To estimate the number of times we have to compute instruction “2:”, we need to estimate for any \(e'\in \mathbf {F}_{2}^{k+\ell }\) of weight p the probability that \(e'\) leads to a success given that \(e'H'^T=s'\).

    If we fix H, the sample space in which we compute the probabilities is \(\varOmega _H=\{eH^T\mid \mathrm {wt}(e)=w\}\) equipped with a uniform distribution (because of Assumption 1). We consider the two events

    • \(\mathcal {S}_H(e')=\{s=(s'',e'H'^T)\in \varOmega _H\}\),

    • \(\mathrm {Succ}_H(e')=\{eH^T\mid e=(e'',e'), e''\in \mathbf {F}_{2}^{n-k-\ell }, \mathrm {wt}(e'')=w-p\}\).

    The probability we are interested in is \(\Pr _{\varOmega _H}(\mathrm {Succ}_H(e')\mid \mathcal {S}_H(e'))\). We have \(\Pr _{\varOmega _H}(\mathcal {S}_H(e'))\approx 2^{-\ell }\) because we expect the set \(\varOmega _H\) to behave like a set of random vectors (true for almost all matrices H). And the set \(\mathrm {Succ}_H(e')\subset \varOmega _H\) has cardinality \(\binom{n-k-\ell }{w-p}\), as it contains, for a fixed \(e'\), as many elements as there are vectors \(e''\in \mathbf {F}_{2}^{n-k-\ell }\) of weight \(w-p\). Finally

    $$\begin{aligned} \Pr _{\varOmega _H}(\mathrm {Succ}_H(e')\mid \mathcal {S}_H(e')) = \frac{\Pr _{\varOmega _H}(\mathrm {Succ}_H(e'))}{\Pr _{\varOmega _H}(\mathcal {S}_H(e'))} = \frac{\binom{n-k-\ell }{w-p}\, 2^\ell }{\binom{n}{w}}. \end{aligned}$$

    The second part of the statement follows.    \(\square \)
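
The two quantities above are straightforward to evaluate. The following minimal sketch uses toy parameters chosen purely for illustration (they do not come from the paper) and prints the expected number of main-loop iterations \(1/P\) together with the conditional success probability of instruction “2:”.

from math import comb

# Toy parameters, for illustration only.
n, k, w = 1024, 512, 30
ell, p = 20, 4

# Probability that the error splits with weights (w - p, p), as in item 1.
P = comb(n - k - ell, w - p) * comb(k + ell, p) / comb(n, w)

# Pr[ success | e' matches the ell-bit partial syndrome ], as in item 2.
succ = comb(n - k - ell, w - p) * 2**ell / comb(n, w)

print(f"expected main-loop iterations ~ 1/P = {1 / P:.3e}")
print(f"Pr[success | syndrome match on ell bits] = {succ:.3e}")

Real parameter sets are of course much larger; the point is only that both counting arguments of the proof translate directly into these two ratios.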

Proof

(of Corollary 1). Using the fact that \(\binom{n}{w}\) is proportional to \(\frac{2^{nh(w/n)}}{\sqrt{w(1-w/n)}}\), where \(h(x)=-x\log _2(x)-(1-x)\log _2(1-x)\) is the binary entropy function, we easily obtain that

$$\begin{aligned} \log _2 \frac{\binom{(k+\ell )/2}{(1-a)p}\binom{k+\ell }{ap}}{\binom{k+\ell }{p}} = (k+\ell )\left( \frac{h(2(1-a)x)}{2}+h(ax)-h(x)\right) (1+\mathbf {o}(x)) \end{aligned}$$

where \(x=p/(k+\ell )\). An easy study of the above function proves that it is positive for any a with \(1>a\ge 0.5\). Using Proposition 1, if \(L\ge \binom{(k+\ell )/2}{(1-a)p}\) then the total contribution of instruction “1:” is at least

$$\begin{aligned} \frac{\binom{n}{w}L}{\binom{n-k-\ell }{w-p}\binom{k+\ell }{p}} \ge \frac{\binom{n}{w}\binom{(k+\ell )/2}{(1-a)p}}{\binom{n-k-\ell }{w-p}\binom{k+\ell }{p}} \ge \frac{\binom{n}{w}}{\binom{n-k-\ell }{w-p}\binom{k+\ell }{ap}}. \end{aligned}$$

Adding to that the contribution of “2:”, we obtain the lower bound of the statement.    \(\square \)
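
As a quick check of the positivity claim above in the boundary case \(a=1/2\) (the other values of a require the slightly longer study mentioned in the proof), the function reduces to \(h(x/2)-h(x)/2\), which is nonnegative by concavity of the binary entropy:

$$\begin{aligned} \frac{h(x)}{2}+h\Big (\frac{x}{2}\Big )-h(x)=h\Big (\frac{x}{2}\Big )-\frac{h(x)}{2}=h\Big (\frac{0+x}{2}\Big )-\frac{h(0)+h(x)}{2}\ge 0, \end{aligned}$$

with strict inequality for \(0<x<1\) since h is strictly concave and \(h(0)=0\).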

B Proofs of Main Theorem Section

Proof

(of Lemma 1). We have

$$\begin{aligned} \log \big (B_a(\ell ,p)\big )\approx \max \Bigg \{\log \bigg (\frac{\binom{n}{w}}{\binom{n-k-\ell }{w-p}\binom{k+\ell }{ap}}\bigg ),\log \bigg (\frac{\binom{n}{w}}{\binom{n-k-\ell }{w-p}\, 2^\ell }\bigg )\Bigg \}. \end{aligned}$$

We divide our function in two parts

$$\begin{aligned} f(\ell ,p)=\log \bigg (\frac{\binom{n}{w}}{\binom{n-k-\ell }{w-p}\binom{k+\ell }{ap}}\bigg )\quad \text{ and } \quad g(\ell ,p)=\log \bigg (\frac{\binom{n}{w}}{\binom{n-k-\ell }{w-p}\, 2^\ell }\bigg ). \end{aligned}$$

The function \(B_a\) is defined on the domain \(\mathcal { D}\) shown in Fig. 6. Our goal is to show that there is a point \((\hat{\ell },\hat{p})\in \mathcal { D}\) such that \(f(\hat{\ell },\hat{p})=g(\hat{\ell },\hat{p})\) which minimizes \(B_a\). We will start by studying the interior of \(\mathcal { D}\) and verify that if \(B_a\) achieves its minimum at \((\ell ^*,p^*)\) then there is a point \((\hat{\ell },\hat{p})\in \mathcal{V}\) at which \(B_a\) takes the same value. Secondly, we will search for all possible minimum points on the boundary \(\partial \mathcal { D}\) and show that, again, the minimum is attained at a point of \(\mathcal{V}\); these two cases allow us to conclude the proof.

We suppose that \((\ell ^*,p^*)\notin \partial \mathcal { D}\) minimizes \(B_a\) and that \(f(\ell ^*,p^*)>g(\ell ^*,p^*)\). So, by continuity, \(B_a(\ell ,p)=\max \{f(\ell ,p),g(\ell ,p)\}=f(\ell ,p)\) for all \((\ell ,p)\) in a neighborhood U of \((\ell ^*,p^*)\) which does not intersect the boundary. Then,

$$\begin{aligned} \min _{(\ell ,p)\in \mathcal { D}} B_a(\ell ,p)=\min _{(\ell ,p)\in U} B_a(\ell ,p)=\displaystyle \min _{(\ell ,p)\in U}f(\ell ,p), \end{aligned}$$

and in particular \(\nabla f(\ell ^*,p^*)=(0,0)\). Since \(a\in \,]0,1\,[\) , that equality has a unique solution

$$\begin{aligned} \frac{w-p^*}{n-k-\ell ^*}=0 \text{ or } 1. \end{aligned}$$

That means \((\ell ^*,p^*)\in \partial \mathcal { D}\), so this case is impossible.

In the case where \(g(\ell ^*,p^*)>f(\ell ^*,p^*)\), we deduce similarly that \(\nabla g(\ell ^*,p^*)=(0,0)\), and we obtain

$$\begin{aligned} \frac{\partial g}{\partial \ell }&= -\log \Big (1-\frac{w-p^*}{n-k-\ell ^*}\Big )-1=0,\\ \frac{\partial g}{\partial p}&= h^\prime \Big (\frac{w-p^*}{n-k-\ell ^*}\Big )=0. \end{aligned}$$

Therefore,

$$\begin{aligned} \mathscr {L}^*:\quad \frac{w-p^*}{n-k-\ell ^*}=\frac{1}{2}, \end{aligned}$$

this equation defines a line in the plane on which \(g(\ell ,p)\) is constant. We use the function \(p_0:\,[\,n-k-2w,n-k\,]\rightarrow \mathbb {R}\) defined by \(p_0(\ell )=w-\frac{n-k-\ell }{2}\) to describe the points of \(\mathscr {L}^*\). Now, our objective is to show that there is a point of \(\mathcal{V}\) on this line \(\mathscr {L}^*\). By hypothesis \(\ell _0=n-k-2w>0\), so \((\ell _0,p_0(\ell _0))=(\ell _0,0)\in \mathscr {L}^*\cap \,\mathcal { D}\) and

$$\begin{aligned} g(\ell _0,p_0(\ell _0))=nh\big (\frac{w}{n}\big )-(n-k-\ell _0)-\ell _0<nh\big (\frac{w}{n}\big )-(n-k-\ell _0)=f(\ell _0,p_0(\ell _0)). \end{aligned}$$

Since \(g(\ell ^*,p_0(\ell ^*))> f(\ell ^*,p_0(\ell ^*))\) and the line segment between \((\ell _0,p_0(\ell _0))\) and \((\ell ^*,p_0(\ell ^*))\) belongs to \(\mathcal D\), there is a \(\hat{\ell }\in \,]\ell _0,\ell ^*[\) such that \(g(\hat{\ell },p_0(\hat{\ell }))= f(\hat{\ell },p_0(\hat{\ell }))\). Because \(g(\hat{\ell },p_0(\hat{\ell }))=g(\ell ^*,p_0(\ell ^*))\), we conclude that \((\hat{\ell },p_0(\hat{\ell }))\) is a minimum point for \(B_a\) and it belongs to \(\mathcal{V}\).

Fig. 6. Definition domain of the function \(B_a\)

Now, we suppose that the minimum point \((\ell ^*,p^*)\) belongs to the boundary \(\partial \mathcal { D}\) and we search for all possible candidates on the boundary. We can divide the boundary into 5 line segments and analyze the monotonicity of f and g with respect to \(\ell \) or p. We obtain that

$$\begin{aligned} \min _{\partial \mathcal { D}} f(\ell ,p)=\min \{f(0,0),g(0,0),f(k/a,0),\max \{f(n-k,w),g(n-k,w)\}\}; \end{aligned}$$

Since \(f(0,0)=g(0,0)\le g(k/a,0)=f(k/a,0)\), we focus our analysis on the points (0, 0) and \((n-k,w)\), so it is enough to study the case \((\ell ^*,p^*)=(n-k,w)\). We can analyze f over the line

$$\begin{aligned} \mathscr {L}_{m_0}:\frac{w-p}{n-k-\ell }=\frac{w}{n-k}=\frac{p}{k+\ell }. \end{aligned}$$

Along this line, we can compute the derivative

$$\begin{aligned} \frac{\partial f}{\partial \ell }=h\Big (\frac{w}{n}\Big )-\bigg (-a\frac{w}{n}\log \Big (a\frac{w}{n}\Big )-\Big (1-a\frac{w}{n}\Big )\log \Big (1-a\frac{w}{n}\Big )\bigg )=h\Big (\frac{w}{n}\Big )-h\Big (a\frac{w}{n}\Big )>0, \end{aligned}$$

because \(w/n<1/2\). Therefore, \(B_a\) is increasing with respect to \(\ell \) along this line, and \(B_a\) does not achieve its local minimum at \((n-k,w)\); for that reason the minimum is not \(B_a(n-k,w)\) in the case \(f(n-k,w)>g(n-k,w)\).

In the case where \(B_a(\ell ^*,p^*)=g(n-k,w)>f(n-k,w)\), we can take any point \((\ell ^{**},p^{**})\) that belongs to the interior of \(\mathcal { D}\) and to the line \(\mathscr {L}^*\) (used before), and we obtain again a point \((\hat{\ell },\hat{p})\in \mathcal{V}\) as before.    \(\square \)

Proof

(of Lemma 2). We take the binary logarithm in the third hypothesis and obtain

$$\begin{aligned} \frac{\ell }{n}&= \frac{p}{n}\bigg (1-\log \Big (\frac{p}{k+\ell }\Big )-\mathbf {O} \Big (\frac{p}{k+\ell }\Big ) \bigg )+\frac{p}{n}\\ &\le \frac{w}{n}\bigg (2+\mathbf {O}\Big (\frac{w}{k}\Big )\bigg )-\frac{w}{n}\log \Big (\frac{w}{k}\Big )\\ &= \frac{w}{n}\bigg (2-\log \Big (\frac{n}{k}\Big )+\frac{n}{k}\mathbf {O}\Big (\frac{w}{n}\Big )\bigg )-\frac{w}{n}\log \Big (\frac{w}{n} \Big ) \\ &= \mathbf {o}(1)-\mathbf {o}(1). \end{aligned}$$

So we conclude \(\frac{\ell }{n}=\mathbf {o}(1). \)    \(\square \)

Proof

(of Lemma 3). It is enough to analyze the quotient of the respective terms in these expressions:

$$\begin{aligned} \log \Bigg (\frac{\binom{n-k-\ell }{w-p}}{\binom{n-k}{w-p}}\Bigg )&= (n-k-\ell )h\Big (\frac{w-p}{n-k-\ell }\Big )-(n-k)h\Big (\frac{w-p}{n-k}\Big )\\ &= -(w-p)\bigg (\log \Big (\frac{w-p}{n-k-\ell }\Big )+\Big (1+\mathbf {O}\big (\frac{w-p}{n-k-\ell }\big )\Big )\bigg )\\ &\quad +(w-p)\bigg (\log \Big (\frac{w-p}{n-k}\Big )+\Big (1+\mathbf {O}\big (\frac{w-p}{n-k}\big )\Big )\bigg )\\ &\le (w-p)\bigg (\log \Big (1-\frac{\ell }{n-k}\Big )+\mathbf {O}\Big (\frac{w-p}{n-k-\ell }\Big )\bigg )\\ &\le w\bigg (\log \Big (1-\frac{\ell }{n}\Big )+\frac{n}{n-k-\ell }\mathbf {O}\Big (\frac{w}{n}\Big )\bigg )\\ &= w\,\mathbf {o}(1). \end{aligned}$$

In the same way, we obtain

$$\begin{aligned} \log \Bigg (\frac{\binom{k+\ell }{ap}}{\binom{k}{ap}}\Bigg )\le ap\bigg (\log \Big (1+\frac{\ell }{k}\Big )+\frac{n}{k+\ell }\mathbf {O}\Big (\frac{w}{n}\Big ) \bigg )\le w\,\mathbf {o}(1). \end{aligned}$$

   \(\square \)

Proof

(of Lemma 4). We analyze the derivative of \(b_a\):

$$\begin{aligned} b_a^\prime (p)=a\log \Big (\frac{ap}{k-ap}\Big )-\log \Big (\frac{w-p}{n-k-(w-p)}\Big ). \end{aligned}$$

We can see that the function \(b_a\) is decreasing in a neighborhood of \(p=0\) and increasing in a neighborhood of \(p=w\). Moreover, \(b_a^{\prime \prime }(p)>0\), so \(b_a(p)\) is a convex function and the minimization problem has a unique solution \(\hat{p}\). Analyzing the equation \(b_a^\prime (\hat{p})=0\), we obtain

$$\begin{aligned} \frac{a^a\hat{p}}{w-\hat{p}}=\frac{(k-\hat{p})^a\hat{p}^{1-a}}{n-k-(w-\hat{p})} \end{aligned}$$

That implies

$$\begin{aligned} a^a\frac{\hat{p}}{w}\le \frac{k^aw^{1-a}}{n-k-w}. \end{aligned}$$

We deduce that \(\frac{\hat{p}}{w}=\mathbf {O}\Big (\big (\frac{w}{n}\big )^{1-a}\Big )\).    \(\square \)
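
The behavior stated in Lemma 4 is easy to observe numerically. The sketch below is ours and only illustrative: it solves \(b_a^\prime (p)=0\) by plain bisection, with the derivative written above and with arbitrary sample parameters \(a=1/2\), \(n=10^6\), \(k=n/2\), \(w=1000\).

from math import log2

def b_prime(p, a, n, k, w):
    # Derivative of b_a from the proof above (its root is base-independent).
    return a * log2(a * p / (k - a * p)) - log2((w - p) / (n - k - (w - p)))

def bisect(f, lo, hi, iters=200):
    # Plain bisection; assumes f(lo) and f(hi) have opposite signs.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a, n = 0.5, 1_000_000
k, w = n // 2, 1_000          # rate 1/2, sub-linear error weight
p_hat = bisect(lambda p: b_prime(p, a, n, k, w), 1e-9, w - 1e-9)
print("p_hat / w =", p_hat / w)

For these values the sketch reports \(\hat{p}/w\approx 0.004\), well below \((w/n)^{1-a}\approx 0.03\), consistent with the bound of the lemma.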

Now we have all the asymptotic properties and reductions that we need to prove our main result. We will use the well-known Stirling approximation of the binomial coefficient

$$\begin{aligned} \binom{n}{w} \approx \frac{2^{nh(w/n)} }{ \sqrt{w(1-w/n)} }, \end{aligned}$$

and we will ignore polynomial factors.
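
As a minimal check of this approximation (with arbitrary sample values \(n=10^4\), \(w=100\), chosen here for illustration), the entropy term captures \(\log _2\binom{n}{w}\) up to the small polynomial factor that is being ignored.

from math import comb, log2

def h(x):
    # binary entropy function
    return -x * log2(x) - (1 - x) * log2(1 - x)

n, w = 10_000, 100
exact = log2(comb(n, w))
approx = n * h(w / n)
print(exact, approx, approx - exact)  # gap ~ 0.5*log2(w*(1-w/n)) + O(1)

The gap grows only logarithmically in w, which is why polynomial factors can safely be dropped from the exponent.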

Proof

(of Theorem 1). The first estimation of \(c_a\), when \(w=\mathbf {o}(n)\), gives us

$$\begin{aligned} c_a(n,k,w)&\le \frac{1}{w}\log \big (B_a(0,0)\big )\\ &= \frac{1}{w}\Big (\log \binom{n}{w} -\log \binom{n-k}{w} \Big )\\ &= \frac{n}{w}h\Big (\frac{w}{n}\Big )-\frac{n-k}{w}h\Big (\frac{w}{n-k}\Big )\\ &= \bigg (1-\log \Big (\frac{w}{n}\Big )+\mathbf {O}\Big (\frac{w}{n}\Big )\bigg )-\bigg (1-\log \Big (\frac{w}{n-k}\Big )+\mathbf {O}\Big (\frac{w}{n-k}\Big ) \bigg )\\ &= \log \Big (\frac{n}{n-k} +\mathbf {O}\Big (\frac{w}{n}\Big ) \Big ). \end{aligned}$$

So, our objective will be to show the reverse inequality. Lemmas 1, 2 and 3 allow us to reduce it to the inequality

$$\begin{aligned} c_a(n,k,w) \ge \mathbf {o}(1) +\frac{1}{w}\min _{p}\log (b_a(p)). \end{aligned}$$

Finally, we analyze the binary logarithm of \(b_a\) evaluated at the optimal argument \(\hat{p}\):

$$\begin{aligned} \log (b_a(\hat{p}))=\underbrace{nh\Big (\frac{w}{n}\Big )}_\text {(1)}-\underbrace{(n-k)h\Big (\frac{w-\hat{p}}{n-k}\Big )}_\text {(2)}-\underbrace{kh\Big (\frac{a\hat{p}}{k}\Big )}_\text {(3)}. \end{aligned}$$

So, we study these three parts

$$\begin{aligned} (1):&\quad nh\Big (\frac{w}{n}\Big )=w\bigg (1-\log \Big (\frac{w}{n}\Big )+\mathbf {O}\Big (\frac{w}{n}\Big ) \bigg )\\ (2):&\quad (n-k)h\Big (\frac{w-\hat{p}}{n-k}\Big )=(w-\hat{p})\bigg (1-\log \Big (\frac{w-\hat{p}}{n-k}\Big )+\mathbf {O}\Big (\frac{w-\hat{p}}{n-k}\Big ) \bigg )\\ (3):&\quad kh\Big (\frac{a\hat{p}}{k}\Big )=a\hat{p}\bigg ( 1-\log \Big (\frac{a\hat{p}}{k}\Big )+\mathbf {O}\Big (\frac{a\hat{p}}{k}\Big )\bigg ) \end{aligned}$$

We group these terms in two sums: the sum of logarithms and the sum of negligible addends:

$$\begin{aligned} (I)&= -w\log \Big (\frac{w}{n}\Big )+(w-\hat{p})\log \Big (\frac{w-\hat{p}}{n-k}\Big )+a\hat{p}\log \Big (\frac{a\hat{p}}{k}\Big )\\ (II)&= -w\bigg (1+\mathbf {O}\Big (\frac{w}{n}\Big )\bigg )+(w-\hat{p})\bigg (1+{\mathbf O}\Big (\frac{w-\hat{p}}{n-k}\Big )\bigg )+a\hat{p}\bigg (1+{\mathbf O}\Big (\frac{a\hat{p}}{k}\Big )\bigg ) \end{aligned}$$

We continue with the easy part

$$\begin{aligned} (II)&= -w\mathbf {O}\Big (\frac{w}{n}\Big )+(a-1)\hat{p}+(w-\hat{p})\mathbf {O}\Big (\frac{w-\hat{p}}{n-k}\Big )+a\hat{p}\mathbf {O}\Big (\frac{\hat{p}}{k}\Big )\\ &= -w\mathbf {o}(1)+(a-1)\mathbf {o}(w)+(w-\hat{p})\mathbf {o}(1)+a\mathbf {o}(w)\\ &= \mathbf {o}(w). \end{aligned}$$

Finally,

$$\begin{aligned} (I)&= w\bigg (\log \Big (\frac{w-\hat{p}}{n-k}\Big )-\log \Big (\frac{w}{n}\Big )\bigg )+\hat{p}\bigg (a\log \Big (\frac{a\hat{p}}{k} \Big )-\log \Big (\frac{w-\hat{p}}{n-k}\Big )\bigg )\\ &= w\log \Big (\frac{w-\hat{p}}{w}\Big /\frac{n-k}{n}\Big )+\hat{p}\log \Big (a^a\frac{\hat{p}^a}{w-\hat{p}}\frac{n-k}{k^a}\Big )\\ &= w\log \Big (\frac{1-\hat{p}/w}{1-k/n}\Big )+a\hat{p}\log \Big (\frac{a\hat{p}}{w}\Big )+\hat{p}\log \Big (\frac{w^a}{(w-\hat{p})^a}\Big )+\hat{p}\log \Big (\frac{n-k}{k^a(w-\hat{p})^{1-a}}\Big )\\ &= w\log \Big (\frac{1-\hat{p}/w}{1-k/n}\Big )+w\bigg (\mathbf {o}(1)+a\frac{\hat{p}}{w}\log \Big (\frac{w}{w-\hat{p}}\Big ) +\frac{\hat{p}}{w}\log \Big (\frac{(n-k)^{1-a}}{(w-\hat{p})^{1-a}}\Big )\bigg )\\ &= w\log \Big (\frac{1-\hat{p}/w}{1-k/n}\Big )+w\bigg (\mathbf {o}(1)+(1-a)\frac{\hat{p}}{w}\log \Big (\frac{n-k}{w-\hat{p}}\Big )\bigg )\\ &= w\log \Big (\frac{1-\hat{p}/w}{1-k/n}\Big )+w\bigg (\mathbf {o}(1)-(1-a)\frac{\hat{p}}{w}\Big (\mathbf {O}(1)+\log \big (\frac{n}{w}\big )\Big ) \bigg )\\ &= w\log \Big (\frac{1-\hat{p}/w}{1-k/n}\Big )+w\bigg ( \mathbf {o}(1)+\frac{\hat{p}}{w}\log \Big (\Big (\frac{w}{n}\Big )^{1-a} \Big )\bigg ) \end{aligned}$$

So, Lemma 4 implies

$$\begin{aligned} (I)=w\log \Big (\frac{1-\hat{p}/w}{1-k/n}\Big )+w\Big ( \mathbf {o}(1)+\mathbf {o}(1)\Big ). \end{aligned}$$

Finally, we conclude

$$\begin{aligned} c_a(n,k,w)\ge \frac{1}{w}\Big ((I)+(II)\Big )+\mathbf {o}(1)=\log \Big (\frac{1}{1-R}\Big )+ \mathbf {o}(1). \end{aligned}$$

   \(\square \)
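
Combining this lower bound with the first estimation of \(c_a\) at the beginning of the proof, both bounds meet at a constant that depends only on the rate, so for \(w=\mathbf {o}(n)\) and \(R=k/n\)

$$\begin{aligned} c_a(n,k,w)=\log \Big (\frac{1}{1-R}\Big )+\mathbf {o}(1), \end{aligned}$$

independently of a, which is the asymptotic behavior announced in the abstract: in the sub-linear error regime all the ISD variants covered by the analysis share the same exponent.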

Copyright information

© 2016 Springer International Publishing Switzerland

Cite this paper

Canto Torres, R., Sendrier, N. (2016). Analysis of Information Set Decoding for a Sub-linear Error Weight. In: Takagi, T. (ed.) Post-Quantum Cryptography. PQCrypto 2016. Lecture Notes in Computer Science, vol. 9606. Springer, Cham. https://doi.org/10.1007/978-3-319-29360-8_10

  • DOI: https://doi.org/10.1007/978-3-319-29360-8_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-29359-2

  • Online ISBN: 978-3-319-29360-8