
Improved constant-sum encodings for hash-based signatures

  • Regular Paper
  • Published in the Journal of Cryptographic Engineering

Abstract

The Winternitz one-time signature scheme is one of the cornerstones of hash-based signatures. Cruz, Kaji, and Yatani (CKY) propose to use a constant-sum encoding function with this scheme to obtain signature verification at lower and predictable costs, in exchange for increased costs of key generation and signature generation. We give a novel description of this scheme, called Wots-cs, that greatly reduces the costs associated with key and signature generation, as well as signature verification. We achieve this by introducing new deterministic constant-sum encoding algorithms that accept larger sets of parameters than the original proposal. In addition, we provide a security proof of our scheme that relies on weaker assumptions than the CKY variant, reducing signature sizes by \(50\%\). Finally, we compare our work with Wots+ for parameters with the same signature size, and experiment with Xmss to discuss the impact of the encoding and possible applications.

References

  1. Alagic, G., Alperin-Sheriff, J., Apon, D., Cooper, D., Dang, Q., Kelsey, J., Liu, Y.K., Miller, C., Moody, D., Peralta, R., Perlner, R., Robinson, A., Smith-Tone, D.: Status report on the second round of the NIST post-quantum cryptography standardization process. NIST Internal Report 8309, National Institute of Standards and Technology (NIST) (2020)

  2. Aumasson, J.P., Bernstein, D.J., Beullens, W., Dobraunig, C., Eichlseder, M., Fluhrer, S., Gazdag, S.L., Hülsing, A., Kampanakis, P., Kölbl, S., Lange, T., Lauridsen, M.M., Mendel, F., Niederhagen, R., Rechberger, C., Rijneveld, J., Schwabe, P., Westerbaan, B.: SPHINCS\(^{+}\). Submission to the 3rd round of the NIST post-quantum cryptography standardization process (2020). https://sphincs.org/data/sphincs+-round3-submission-nist.zip

  3. Bernstein, D.J., Hülsing, A., Kölbl, S., Niederhagen, R., Rijneveld, J., Schwabe, P.: The SPHINCS\(^{+}\) signature framework. In: Proceedings of the 2019 ACM SIGSAC conference on computer and communications security (2019)

  4. Bollinger, R.C., Burchard, C.L.: Lucas’s theorem and some related results for extended Pascal triangles. Am. Math. Mon. 97(3), 198–204 (1990)


  5. Boneh, D., Shen, E., Waters, B.: Strongly unforgeable signatures based on computational Diffie-Hellman. In: M. Yung, Y. Dodis, A. Kiayias, T. Malkin (eds.) Public Key Cryptography—PKC 2006, Lecture Notes in Computer Science, vol. 3958, pp. 229–240 (2006). https://doi.org/10.1007/11745853_15

  6. Bos, J.W., Hülsing, A., Renes, J., van Vredendaal, C.: Rapidly verifiable XMSS signatures. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2021(1), 137–168 (2020). https://doi.org/10.46586/tches.v2021.i1.137-168

  7. Cooper, D.A., Apon, D.C., Dang, Q.H., Davidson, M.S., Dworkin, M.J., Miller, C.A.: Recommendation for Stateful Hash-Based Signature Schemes. NIST Special Publication 800-208, National Institute of Standards and Technology (NIST) (2020)


  8. Cruz, J.P., Yatani, Y., Kaji, Y.: Constant-sum fingerprinting for Winternitz one-time signature. In: 2016 International Symposium on Information Theory and Its Applications (ISITA), pp. 703–707 (2016)

  9. Eger, S.: Stirling’s approximation for central extended binomial coefficients. Am. Math. Mon. 121(4), 344–349 (2014)


  10. Fahssi, N.E.: Polynomial triangles revisited (2012). arxiv:1202.0228v7

  11. Goldwasser, S., Bellare, M.: Lecture notes on cryptography (2008). https://cseweb.ucsd.edu/~mihir/papers/gb.pdf

  12. Hülsing, A.: W-OTS\(^+\) – Shorter Signatures for hash-based signature schemes. In: A. Youssef, A. Nitaj, A.E. Hassanien (eds.) Progress in Cryptology—AFRICACRYPT 2013, Lecture Notes in Computer Science, vol. 7918, pp. 173–188 (2013). https://doi.org/10.1007/978-3-642-38553-7_10

  13. Hülsing, A., Butin, D., Gazdag, S.L., Rijneveld, J., Mohaisen, A.: XMSS: Extended hash-based signatures. Request for Comments 8391, Internet Engineering Task Force (2018). https://tools.ietf.org/html/rfc8391

  14. Kaji, Y., Cruz, J.P., Yatani, Y.: Hash-based signature with constant-sum fingerprinting and partial construction of hash chains. In: Proceedings of the 15th international joint conference on e-business and telecommunications, ICETE 2018—Volume 2: SECRYPT, pp. 463–470 (2018)

  15. Knuth, D.E.: The Art of Computer Programming, Volume 4A: Combinatorial Algorithms, Part 1, 1st edn. Addison–Wesley Professional (2011)

  16. Kudinov, M.A., Kiktenko, E.O., Fedorov, A.K.: Security analysis of the W-OTS\(^{+}\) signature scheme: updating security bounds (2020). arxiv:2002.07419v1

  17. Leurent, G., Peyrin, T.: SHA-1 is a shambles—first chosen-prefix collision on SHA-1 and application to the PGP web of trust. Cryptology ePrint Archive, Report 2020/014 (2020). https://eprint.iacr.org/2020/014

  18. Merkle, R.C.: A certified digital signature. In: G. Brassard (ed.) Advances in Cryptology—CRYPTO ’89, Lecture Notes in Computer Science, vol. 435, pp. 218–238 (1989). https://doi.org/10.1007/0-387-34805-0_21

  19. Perin, L.P., Zambonin, G., Martins, D.M.B., Custódio, R.F., Martina, J.E.: Tuning the Winternitz hash-based digital signature scheme. In: 2018 IEEE Symposium on Computers and Communications (ISCC), pp. 537–542 (2018)

  20. Rankl, W., Effing, W.: Smart Card Handbook, 4th edn. Wiley (2010)

  21. Roh, D., Jung, S., Kwon, D.: Winternitz signature scheme using nonadjacent forms. Secur. Commun. Netw. (2018). https://doi.org/10.1155/2018/1452457


  22. Steinwandt, R., Villányi, V.I.: A one-time signature using run-length encoding. Inf. Process. Lett. 108(4), 179–185 (2008)



Corresponding author

Correspondence to Lucas Pandolfo Perin.


L.P. Perin and G. Zambonin were supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. L. Moura and D. Panario were partially supported by NSERC of Canada.

Appendix A: Proofs of several combinatorial arguments, algorithms and parameter table

A.1 Proof of Theorem 1

Theorem 1

([4, Item 1d]). Consider the set of t-tuples with elements bounded by n whose sum is s. The cardinality of this set is given by

$$\begin{aligned} |\tau _{(t, n, s)}| = \sum _{i = 0}^{k} (-1)^{i} \left( {\begin{array}{c}t\\ i\end{array}}\right) \left( {\begin{array}{c}s - (n + 1) i + t - 1\\ t - 1\end{array}}\right) , \end{aligned}$$

where \(k = \min \left( t, \left\lfloor \tfrac{s}{n + 1}\right\rfloor \right) .\)

Proof

Let s be any non-negative integer. The number of ways that t non-negative integers, smaller or equal to n, can be arranged and sum exactly to s, denoted by \(|\tau _{(t, n, s)}|\), is given by the coefficient of the term \(x^{s}\) from the polynomial

$$\begin{aligned} g(x) = (1 + x^{1} + \cdots + x^{n})^{t}. \end{aligned}$$

We can represent the same polynomial in terms of the sum of the inner geometric series as:

$$\begin{aligned} g(x) = \left( \frac{1 - x^{n + 1}}{1 - x} \right) ^{t} = (1 - x^{n + 1})^{t} \frac{1}{(1 - x)^{t}}, \end{aligned}$$

and from this we expand the binomials such that

$$\begin{aligned} g(x) = \left( \sum _{i = 0}^{t} (-1)^{i} \left( {\begin{array}{c}t\\ i\end{array}}\right) x^{(n + 1) i} \right) \left( \sum _{l = 0}^{\infty } \left( {\begin{array}{c}l + t - 1\\ t - 1\end{array}}\right) x^{l} \right) . \end{aligned}$$

However, we are only interested in the coefficient of \(x^{s}\), which can be expressed as follows:

$$\begin{aligned} |\tau _{(t, n, s)}| = \sum _{i = 0}^{t} (-1)^{i} \left( {\begin{array}{c}t\\ i\end{array}}\right) \left( {\begin{array}{c}s - (n + 1) i + t - 1\\ t - 1\end{array}}\right) . \end{aligned}$$

Finally, the binomial coefficients vanish unless \(i \le t\) and \(s - (n + 1) i + t - 1 \ge t - 1\), that is, unless \(s - (n + 1) i \ge 0\), which holds exactly when \(i \le \frac{s}{n + 1}\). Thus, \(k = \min \left( t, \left\lfloor \tfrac{s}{n + 1}\right\rfloor \right) \) and

$$\begin{aligned} |\tau _{(t, n, s)}| = \sum _{i = 0}^{k} (-1)^{i} \left( {\begin{array}{c}t\\ i\end{array}}\right) \left( {\begin{array}{c}s - (n + 1) i + t - 1\\ t - 1\end{array}}\right) . \end{aligned}$$

\(\square \)
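
As an illustration, the following minimal Python sketch (using only the standard library; the parameter triples are chosen arbitrarily) compares the closed form of Theorem 1 with a brute-force enumeration of the tuples.

# A small consistency check of Theorem 1, for illustration only: the
# inclusion-exclusion formula is compared against brute-force enumeration
# of the tuples for a few arbitrarily chosen parameters.
from itertools import product
from math import comb

def card_formula(t, n, s):
    # closed form of Theorem 1 for |tau_(t, n, s)|
    k = min(t, s // (n + 1))
    return sum((-1) ** i * comb(t, i) * comb(s - (n + 1) * i + t - 1, t - 1)
               for i in range(k + 1))

def card_bruteforce(t, n, s):
    # direct count of t-tuples with entries in {0, ..., n} summing to s
    return sum(1 for b in product(range(n + 1), repeat=t) if sum(b) == s)

for t, n, s in [(3, 4, 6), (4, 2, 4), (5, 3, 7)]:
    assert card_formula(t, n, s) == card_bruteforce(t, n, s)
print("Theorem 1 matches brute-force enumeration on the sampled parameters")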

A.2 Proof of Theorem 2

We note that \(|\tau _{(t, 1, s)}| = \left( {\begin{array}{c}t\\ s\end{array}}\right) \), and that \(|\tau _{(t, n, s)}|\) with \(n\ge 1\) is a generalization of the binomial coefficients that has been well studied [4, 9, 10]. For any fixed \(n \ge 1\), together with Proposition 1, the following theorem implies the unimodality of \(|\tau _{(t, n, s)}|\) as a function of s, for a fixed t. We do not claim the result is new, but since we could not locate a proof of this exact statement, we give an inductive proof.

Theorem 2

Let \(n \ge 1\), \(t \ge 1\) and \(0 \le s \le \left\lceil \frac{tn}{2} \right\rceil \). For any \(0 \le j \le s\) we have

$$\begin{aligned} |\tau _{(t, n, j)}| \le |\tau _{(t, n, s)}|. \end{aligned}$$

Proof

We prove the theorem by induction on t. To simplify calculations, we extend the definition of \(\tau \) so that \(|\tau _{(t, n, s)}| = 0\) for \(s < 0\) and for \(s > tn\). We prove the result for all \(j \in \mathbb {Z}\) with \(j \le s\). To improve readability, we set \(r = \frac{n}{2}\).

The base case is \(t = 1\). For any \(0 \le j < s \le \lceil {tr}\rceil \), we have \(|\tau _{(1, n, s)}| = |\tau _{(1, n, j)}| = 1\), and whenever \(j < 0\) we have \(|\tau _{(t, n, j)}| = 0\), so the inequality holds for \(t = 1\).

For the inductive step, we let \(t \ge 2\), \(s \le \lceil {tr}\rceil \) and assume the main statement holds for \(t' = t - 1\), and any \(j'\) and \(s'\) such that \(0 \le s' \le \lceil {(t - 1)r}\rceil \) and \(j' < s'\). It is enough to show that

$$\begin{aligned} |\tau _{(t, n, s - 1)}| \le |\tau _{(t, n, s)}|, \end{aligned}$$
(4)

since the repeated application of this equation extends the result for any \(j < s\). Recalling that \(|\tau _{(t, n, s)}| = \sum _{j = 0}^{n} |\tau _{(t - 1, n, s - j)}|\) by Eq. (1), and letting \(\ell = \lceil {tr}\rceil - s\), we rewrite this equation in terms of \(\ell \), which gives

$$\begin{aligned} |\tau _{(t, n, s)}| = | \tau _{(t, n, \lceil {tr}\rceil - \ell )} | = \sum _{i = -\lfloor {r}\rfloor }^{\lceil {r}\rceil } | \tau _{(t - 1, n, \lfloor {(t - 1)r}\rfloor - \ell + i)} |. \end{aligned}$$

The above equality can be verified by carefully analyzing the four cases where n and t assume even or odd values. Now, we write the above equation with \(s - 1\) in place of s

$$\begin{aligned} |\tau _{(t, n, s - 1)}| = | \tau _{(t, n, \lceil {tr}\rceil - \ell - 1)} | = \sum _{i = -\lfloor {r}\rfloor }^{\lceil {r}\rceil } | \tau _{(t - 1, n, \lfloor {(t - 1)r}\rfloor - \ell + i - 1)} |. \end{aligned}$$

Thus, we get that \(|\tau _{(t, n, s - 1)}| \le |\tau _{(t, n ,s)}|\) if and only if

$$\begin{aligned} |\tau _{(t - 1, n, \lfloor {(t - 1)r}\rfloor - \ell - \lfloor {r}\rfloor - 1)} | \le | \tau _{(t - 1, n, \lfloor {(t - 1)r}\rfloor - \ell + \lceil {r}\rceil )} |. \end{aligned}$$
(5)

If \(\ell \ge \lceil {r}\rceil \) then \(\lfloor {(t - 1)r}\rfloor - \ell + \lceil {r}\rceil \le \lceil {(t - 1)r}\rceil \) and by the induction hypothesis Eq. (5) holds. If \(\ell < \lceil {r}\rceil \) we apply Proposition 1 to find that

$$\begin{aligned} | \tau _{(t - 1, n, \lfloor {(t - 1)r}\rfloor - \ell + \lceil {r}\rceil )} | = | \tau _{(t - 1, n, \lceil {(t - 1)r}\rceil + \ell - \lceil {r}\rceil )} |, \end{aligned}$$

and thus \(\lfloor {(t - 1)r}\rfloor - \ell - \lfloor {r}\rfloor - 1 \le \lceil {(t - 1)r}\rceil + \ell - \lceil {r}\rceil \le \lceil {(t - 1)r}\rceil \). From this, we apply the induction hypothesis so that

$$\begin{aligned} | \tau _{(t - 1, n, \lfloor {(t - 1)r}\rfloor - \ell - \lfloor {r}\rfloor - 1)} |&\le |\tau _{(t - 1, n, \lfloor {(t - 1)r}\rfloor - \ell + \lceil {r}\rceil )} | \\&= |\tau _{(t - 1, n, \lceil {(t - 1)r}\rceil + \ell - \lceil {r}\rceil )} |. \end{aligned}$$

This completes the proof of Eq. (5) that implies Eq. (4) and concludes the proof. \(\square \)
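
The monotonicity stated in Theorem 2 is also easy to confirm numerically. The minimal sketch below (illustrative only; the grid of parameters is arbitrary) reuses the closed form of Theorem 1 and checks that the counts are non-decreasing up to the central sum.

# Illustrative check of Theorem 2: |tau_(t, n, s)| is non-decreasing in s
# up to the central sum ceil(t * n / 2), on a small grid of (t, n).
from math import comb

def card(t, n, s):
    if s < 0 or s > t * n:
        return 0
    k = min(t, s // (n + 1))
    return sum((-1) ** i * comb(t, i) * comb(s - (n + 1) * i + t - 1, t - 1)
               for i in range(k + 1))

for t in range(1, 7):
    for n in range(1, 7):
        seq = [card(t, n, s) for s in range((t * n + 1) // 2 + 1)]
        assert all(x <= y for x, y in zip(seq, seq[1:]))
print("counts are non-decreasing up to the central sum on the sampled grid")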

A.3 Proof of Proposition 3

Proposition 3

For any \(0 \le l \le n\),

$$\begin{aligned} |\tau _{(t, n, s)}^{(l)}| = \sum _{i = 0}^{k} (-1)^{i} \left( {\begin{array}{c}t - 1\\ i\end{array}}\right) \left[ \left( {\begin{array}{c}s - (n + 1)i + t - 1\\ t - 1\end{array}}\right) - \left( {\begin{array}{c}s - (n + 1)i + t - 2 - l\\ t - 1\end{array}}\right) \right] , \end{aligned}$$

where \(k = \min \left( t, \left\lfloor \tfrac{s}{n + 1}\right\rfloor \right) \).

Proof

We recall that the summation in Theorem 1 can be truncated at the upper bound k because every term with \(i > k\) vanishes: either the first binomial coefficient is zero (when \(i > t\)) or the second one is (when \(s - (n + 1) i < 0\)). Then, from Proposition 2 and by letting

$$\begin{aligned} k = \max _{j \in \{0, \dots , n\}} \left( \min \left( t, \left\lfloor \tfrac{s - j}{n + 1}\right\rfloor \right) \right) = \min \left( t, \left\lfloor \tfrac{s}{n + 1}\right\rfloor \right) , \end{aligned}$$

we can express the cardinality \(|\tau _{(t, n, s)}^{(l)}|\) as

$$\begin{aligned}&\sum _{j = 0}^{l} \sum _{i = 0}^{k} (-1)^{i} \left( {\begin{array}{c}t - 1\\ i\end{array}}\right) \left( {\begin{array}{c}s - (n + 1)i + t - 2 - j\\ t - 2\end{array}}\right) \\&\quad = \sum _{i = 0}^{k} (-1)^{i} \left( {\begin{array}{c}t - 1\\ i\end{array}}\right) \left[ \sum _{j = 0}^{l} \left( {\begin{array}{c}s - (n + 1)i + t - 2 - j\\ t - 2\end{array}}\right) \right] . \end{aligned}$$

For any \(0 \le j \le l\), we let \(\alpha = s - (n + 1)i\) and \(\beta = t - 2\). Then, the inner summation can be simplified as:

$$\begin{aligned} \sum _{j = 0}^{l} \left( {\begin{array}{c}\alpha + \beta - j\\ \beta \end{array}}\right)&= \sum _{j = 0}^{\alpha + \beta } \left( {\begin{array}{c}j\\ \beta \end{array}}\right) - \sum _{j = 0}^{\alpha + \beta - l - 1} \left( {\begin{array}{c}j\\ \beta \end{array}}\right) \\&= \left( {\begin{array}{c}\alpha + \beta + 1\\ \beta + 1\end{array}}\right) - \left( {\begin{array}{c}\alpha + \beta - l\\ \beta + 1\end{array}}\right) . \end{aligned}$$

Substituting the values of \(\alpha \) and \(\beta \) yields the proof. \(\square \)
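
As a numerical cross-check (illustrative only), the sketch below compares the closed form of Proposition 3 with the sum \(\sum _{j = 0}^{l} |\tau _{(t - 1, n, s - j)}|\) used in the proof; the helper C follows the convention that binomial coefficients with a negative upper argument are zero.

# Illustrative check of Proposition 3: the closed form is compared with the
# sum over fixed leading components used in the proof,
# |tau^(l)_(t, n, s)| = sum_{j = 0}^{l} |tau_(t - 1, n, s - j)|.
from math import comb

def C(a, b):
    # binomial coefficient, taken as zero for a negative upper argument
    return comb(a, b) if a >= 0 and b >= 0 else 0

def card(t, n, s):
    if s < 0 or s > t * n:
        return 0
    k = min(t, s // (n + 1))
    return sum((-1) ** i * comb(t, i) * C(s - (n + 1) * i + t - 1, t - 1)
               for i in range(k + 1))

def card_prefix(t, n, s, l):
    # closed form of Proposition 3
    k = min(t, s // (n + 1))
    return sum((-1) ** i * comb(t - 1, i)
               * (C(s - (n + 1) * i + t - 1, t - 1) - C(s - (n + 1) * i + t - 2 - l, t - 1))
               for i in range(k + 1))

for t, n, s in [(4, 3, 5), (5, 2, 4), (3, 4, 6)]:
    for l in range(n + 1):
        assert card_prefix(t, n, s, l) == sum(card(t - 1, n, s - j) for j in range(l + 1))
print("Proposition 3 agrees with the sum over fixed leading components")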

A.4 Proof of Proposition 4

Proposition 4

Let m and t be integers such that \(t=o(m)\), and let n be such that

$$\begin{aligned} |\tau _{(t,n-1, \lfloor \frac{(n - 1)t}{2}\rfloor )}| < 2^m \le |\tau _{(t,n, \lfloor \frac{nt}{2}\rfloor )}|. \end{aligned}$$

Let \(\tau ^m_{(t,n,\lfloor \frac{nt}{2}\rfloor )}\) be the image of the encoding \(\mathbf {E}(\{0,1\}^m) \subseteq \tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}\). Consider the sequence of independent and uniformly distributed random variables \(X=X_m\) from \(\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}\), and the sequence of independent and uniformly distributed random variables \(Y=Y_m\) from \(\tau ^m_{(t,n,\lfloor \frac{nt}{2}\rfloor )}\). Then, X and Y are statistically indistinguishable.

Proof

We need to prove that for every \(D>0\) there exists \(m_0\), such that:

\( \frac{1}{2} \sum _{\alpha \in \tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}} |\Pr [X_m=\alpha ]-\Pr [Y_m=\alpha ]| < \frac{1}{m^D},\) for all \(m\ge m_0\).

The left-hand side of the inequality is equal to

$$\begin{aligned}&\frac{1}{2} \left( \sum _{\alpha \in \tau ^m_{(t,n,\lfloor \frac{nt}{2}\rfloor )}}\left| \frac{1}{2^m} - \frac{1}{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}| }\right| \right. \nonumber \\&\quad + \left. \sum _{\alpha \in (\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}\setminus \tau ^m_{(t,n,\lfloor \frac{nt}{2}\rfloor )})} \left| \frac{1}{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|}-0 \right| \right) \nonumber \\&\quad = 1 - \frac{2^m}{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|} = \frac{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|-2^m}{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|}. \end{aligned}$$
(6)

From [9, Equation (5)], we know that \(|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|\sim \frac{1}{\sqrt{\frac{\pi t}{6}} }\frac{(n+1)^t}{\sqrt{(n+1)^2-1}}.\) So, for m and n large enough, our hypothesis translates to

$$\begin{aligned} \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{n^{t}}{\sqrt{n^{2} - 1}} < 2^{m} \le \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{(n + 1)^{t}}{\sqrt{(n + 1)^{2} - 1}}. \end{aligned}$$

Therefore,

$$\begin{aligned} 2^{m}&\le |\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}| \le \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{(n + 1)^{t}}{\sqrt{(n + 1)^{2} - 1}} \\&= \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{n^{t}}{\sqrt{n^{2} - 1}} \frac{(n + 1)^{t}}{n^{t}} \frac{\sqrt{n^{2} - 1}}{\sqrt{(n + 1)^{2} - 1}} \\&\le 2^{m} \left( 1 + \frac{1}{n} \right) ^{t} \frac{\sqrt{n^{2} - 1}}{\sqrt{n^{2} + 2 n}}. \end{aligned}$$

Using \(|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}| \le 2^{m} \left( 1 + \frac{1}{n} \right) ^{t} \frac{\sqrt{n^{2} - 1}}{\sqrt{n^{2} + 2 n}} \) to bound the numerator and using \(2^m \le |\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|\) to bound the denominator of the left-hand side of (6), we obtain

$$\begin{aligned} \frac{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|-2^m}{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|}&\le \frac{2^m \left( \left( 1 + \frac{1}{n} \right) ^{t} \frac{\sqrt{n^{2} - 1}}{\sqrt{n^{2} + 2 n}} -1 \right) }{2^m} \\&< \left( 1+\frac{1}{n}\right) ^t-1. \end{aligned}$$

Expanding the binomial in the right-hand side, we obtain

$$\begin{aligned} \frac{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|-2^m}{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|} < 1+ \frac{t}{n}+\frac{\left( {\begin{array}{c}t\\ 2\end{array}}\right) }{n^2}+ \cdots + \frac{1}{n^t} -1. \end{aligned}$$

Since \(2^m \le \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{(n + 1)^{t}}{\sqrt{(n + 1)^{2} - 1}}\), we have \(2^{m/t} \le \frac{1}{(\frac{\pi t}{6})^{1/(2t)} }\frac{(n + 1)}{(n^2 + 2n)^{1/(2t)}}\). Thus, we get \(\frac{{\left( {\begin{array}{c}t\\ i\end{array}}\right) }}{n^i} \le \frac{{\left( {\begin{array}{c}t\\ i\end{array}}\right) }}{2^{m/t}} \frac{n+1}{n^i} \frac{1}{ (\frac{\pi t}{6})^{1/(2t)}(n^2 + 2n)^{1/(2t)}} \). Since \(t=o(m)\), then \(\frac{{\left( {\begin{array}{c}t\\ i\end{array}}\right) }}{2^{m/t}} \) approaches zero faster than any polynomial in m. In particular, for a given positive constant D, there exists \(m_0^{(i)}\) such that \(\frac{{\left( {\begin{array}{c}t\\ i\end{array}}\right) }}{n^i} \le \frac{{\left( {\begin{array}{c}t\\ i\end{array}}\right) }}{2^{m/t}} \frac{n+1}{n^i} \frac{1}{ (\frac{\pi t}{6})^{1/(2t)}(n^2 + 2n)^{1/(2t)}} < \frac{1}{tm^D}\), for all \(m \ge m_0^{(i)}\). Now, considering \(m_0=\max \{m_0^{(1)},m_0^{(2)},\ldots ,m_0^{(t)}\}\), we get \( \frac{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|-2^m}{|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|} < 1+ \frac{t}{n}+\frac{{\left( {\begin{array}{c}t\\ 2\end{array}}\right) }}{n^2}+ \ldots + \frac{1}{n^t} -1 \le \frac{1}{m^D}\), for all \(m\ge m_0\). \(\square \)
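
For concreteness, the sketch below (illustrative only; the values of t and m are toy values, not parameters proposed in this work) selects the smallest n satisfying the hypothesis of Proposition 4 and evaluates the exact statistical distance of Eq. (6) alongside the \(\left( 1 + \frac{1}{n}\right) ^{t} - 1\) bound appearing in the proof.

# Illustrative computation; t and m below are toy values. Pick the smallest n
# with 2^m <= |tau_(t, n, floor(nt/2))| and evaluate the exact statistical
# distance of Eq. (6) together with the (1 + 1/n)^t - 1 bound from the proof.
from math import comb

def card(t, n, s):
    k = min(t, s // (n + 1))
    return sum((-1) ** i * comb(t, i) * comb(s - (n + 1) * i + t - 1, t - 1)
               for i in range(k + 1))

t, m = 16, 64                      # toy parameters
n = 1
while card(t, n, (n * t) // 2) < 2 ** m:
    n += 1
size = card(t, n, (n * t) // 2)
distance = (size - 2 ** m) / size  # left-hand side of Eq. (6)
bound = (1 + 1 / n) ** t - 1
print(f"n = {n}, statistical distance = {distance:.4f}, bound = {bound:.4f}")

For realistic message lengths the gap between \(2^{m}\) and \(|\tau _{(t,n,\lfloor \frac{nt}{2}\rfloor )}|\) shrinks further, as asserted by the proposition.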

A.5 Proof of Proposition 5

Proposition 5

Let \(t, n \ge 2\), \(0 \le \eta \le n\) and \(s \le \frac{tn}{2}\). Let \((b_{t - 1}, \ldots , b_{0})\) be chosen uniformly at random from \(\tau _{(t, n, s)}\). Then, for any \(0 \le j \le \eta \) and for any \(0 \le i \le t - 1\),

$$\begin{aligned} \frac{|\tau _{(t-1,n,s-\eta )}|}{|\tau _{(t,n,s)}|} \le \Pr [b_{i} = j] \le \frac{|\tau _{(t-1,n,s)}|}{|\tau _{(t,n,s)}|}.\end{aligned}$$

Proof

We observe that \(b_{i}\) follows the same distribution as \(b_{0}\) for any i. This holds since for any t-tuple in \(\tau _{(t, n, s)}\), each possible index permutation of the tuple is in \(\tau _{(t, n, s)}\). Furthermore, we know that \(|\tau _{(t - 1, n, s - j)}|\) is the cardinality of the set of all t-tuples in \(\tau _{(t, n, s)}\) where \(b_{0} = j\). Hence,

$$\begin{aligned} \Pr [b_{i} = j] = \Pr [b_{0} = j] = \frac{|\tau _{(t - 1, n, s - j)}|}{|\tau _{(t, n, s)}|}. \end{aligned}$$

Applying Theorem 2 (see Appendix A.2), we get the bounds

$$\begin{aligned} \frac{|\tau _{(t - 1, n, s - \eta )}|}{|\tau _{(t, n, s)}|} \le \Pr [b_{i} = j] \le \frac{|\tau _{(t - 1, n, s)}|}{|\tau _{(t, n, s)}|}. \end{aligned}$$

\(\square \)
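
A direct numerical check of Proposition 5, for one arbitrary admissible parameter set, is given below (illustrative only).

# Illustrative check of Proposition 5 for one admissible parameter set:
# the marginal probability Pr[b_i = j] = |tau_(t-1, n, s-j)| / |tau_(t, n, s)|
# stays within the stated bounds.
from math import comb

def card(t, n, s):
    if s < 0 or s > t * n:
        return 0
    k = min(t, s // (n + 1))
    return sum((-1) ** i * comb(t, i) * comb(s - (n + 1) * i + t - 1, t - 1)
               for i in range(k + 1))

t, n, s, eta = 6, 4, 10, 3         # arbitrary values with s <= t * n / 2 and eta <= n
total = card(t, n, s)
lower = card(t - 1, n, s - eta) / total
upper = card(t - 1, n, s) / total
for j in range(eta + 1):
    assert lower <= card(t - 1, n, s - j) / total <= upper
print("marginals lie within the bounds of Proposition 5")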

A.6 Technical results on the behavior of \(\tau _{(t, n, s)}\)

Theorem 4

Let \(t,n,s\ge 1\). Then

$$\begin{aligned} \frac{|\tau _{(t - 1,n,s)}|}{|\tau _{(t, n, s)}|} \le \frac{|\tau _{(t - 1,s,s)}|}{|\tau _{(t,s,s)}|} = \frac{t - 1}{s+t - 1}. \end{aligned}$$

Proof

The case \(t=1\) is trivial, as both numerators equal zero, so we continue the proof for \(t \ge 2\). The equality is easy to derive since \(|\tau _{(t,s,s)}|=\left( {\begin{array}{c}s+t - 1\\ t - 1\end{array}}\right) \) and \(|\tau _{(t - 1,s,s)}|= \left( {\begin{array}{c}s+t-2\\ t-2\end{array}}\right) \), and one only needs to use the formulas for the binomials. So, all we need to prove is:

$$\begin{aligned} \frac{|\tau _{(t - 1,n,s)}|}{|\tau _{(t, n, s)}|} \le \frac{t - 1}{s+t - 1}. \end{aligned}$$
(7)

Consider a bipartite graph \(G=(V=V_1\cup V_2, E)\), where the vertex set V is partitioned into two sets: \(V_1\), labeled by the elements of \(\tau _{(t - 1,n,s)}\), and \(V_2\), labeled by the elements of \(\tau _{(t, n, s)}\). Add an edge between \(A=(a_{t - 2},a_{t - 3},\ldots ,a_{0}) \in \tau _{(t - 1,n,s)}\) and \(B=(b_{t-1},b_{t-2},\ldots ,b_0)\in \tau _{(t, n, s)}\) if and only if tuple B is a “refinement” of tuple A, that is, if there exists \(0\le i \le t - 2\) such that \(a_j=b_j\) for \(0\le j\le i-1\), \(a_j=b_{j+1}\) for \(i+1\le j\le t - 2\), and \(a_i=b_{i+1}+b_i\).

We count the number of edges |E| in two ways. First, we note that the degree of each vertex in \(V_1\) is \(s+1\), as there are in total \(s+1\) ways to refine a tuple whose components add up to s, by splitting one of its components in every possible way. A way to see this is to consider the stars and bars representation of a tuple. A t-tuple with elements adding up to s and with components having values in \(\{0,\ldots ,n\}\) can be represented by a string with s stars and \(t - 1\) bars, where the number of stars between bars (also counting before the first bar and after the last bar) gives the number in each of the t cells. Refining this tuple amounts to adding an extra bar to the string representation in any of the \(s+1\) possible locations, i.e., immediately before any of the s stars or after the last star. For example, tuple \(A=(1,0, 2)\) is represented as *||** and can be refined as |*||**, *|||**, *||*|* or *||**|, corresponding to (0, 1, 0, 2), (1, 0, 0, 2), (1, 0, 1, 1), or (1, 0, 2, 0), respectively. This gives the equation \(|E|=(s+1)|V_1|= (s+1)|\tau _{(t - 1,n,s)}|\).

Second, we give an upper bound on |E| by taking into account the degree of the vertices in \(V_2\). The neighbors of each vertex \(B=(b_{t-1},b_{t-2},\ldots ,b_0)\in V_2=\tau _{(t, n, s)}\) are obtained by “merges” of adjacent positions, that is: \((b_{t-1}+b_{t-2},b_{t-3},\ldots ,b_0)\), \((b_{t-1},b_{t-2}+b_{t-3},\ldots ,b_0)\), and so on until \((b_{t-1},\ldots , b_{2}, b_{1} + b_{0})\), as long as the merged value satisfies \(b_{i+1}+b_i \le n\). Thus, the maximum degree of a vertex in \(V_2\) is \(t - 1\), attained when every one of the \(t - 1\) “merges” is valid and yields a different tuple. Hence, \((t - 1)|V_2|\) is an overestimation of |E|. From this quantity, we subtract the neighbors that were overcounted due to merges in two different positions yielding the same tuple, which happens once each time a zero is present in one of the positions \(b_{t-2}, b_{t-3}, \ldots , b_{1}\). For example, for the tuple \(B=(1,\mathbf{0},\mathbf{0},1,\mathbf{0},3,0)\), instead of 6 distinct neighbors we have only 3: (1, 0, 1, 0, 3, 0), (1, 0, 0, 1, 3, 0), (1, 0, 0, 1, 0, 3), since some merges give the same tuple; indeed, there is one repeated tuple for each bold zero, so the total number of neighbors is \(6-3=3\).

The total number of zeros appearing in positions \(b_{t-2}, b_{t-3}, \ldots , b_{1}\) over all tuples \(B\in V_2=\tau _{(t, n, s)}\) is precisely \((t-2)|\tau _{(t - 1,n,s)}|\); this is so because, whenever one fixes a zero in one of the mentioned \(t-2\) positions, the rest of the t-tuple can be completed by inserting the components of one of the tuples of \(\tau _{(t - 1,n,s)}\). Thus, \(|E| \le (t - 1)|V_2| - (t-2)|\tau _{(t - 1,n,s)}| = (t - 1) |\tau _{(t, n, s)}| - (t-2)|\tau _{(t - 1,n,s)}|\). Combining this with the previous equation for |E|, we obtain \((s+1)|\tau _{(t - 1,n,s)}|= |E| \le (t - 1) |\tau _{(t, n, s)}| - (t-2)|\tau _{(t - 1,n,s)}|\). This gives \((s+t - 1)|\tau _{(t - 1,n,s)}| \le (t - 1) |\tau _{(t, n, s)}| \), which implies Eq. (7). \(\square \)
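
The bound of Theorem 4 can be spot-checked exhaustively for small parameters; the sketch below (illustrative only) verifies it in cross-multiplied form, so that all arithmetic stays over the integers.

# Illustrative spot check of Theorem 4, in cross-multiplied integer form:
# |tau_(t-1, n, s)| * (s + t - 1) <= (t - 1) * |tau_(t, n, s)| on a small grid.
# The case t = 1 is trivial, so the grid starts at t = 2.
from math import comb

def card(t, n, s):
    if s < 0 or s > t * n:
        return 0
    k = min(t, s // (n + 1))
    return sum((-1) ** i * comb(t, i) * comb(s - (n + 1) * i + t - 1, t - 1)
               for i in range(k + 1))

for t in range(2, 7):
    for n in range(1, 7):
        for s in range(1, t * n + 1):
            assert card(t - 1, n, s) * (s + t - 1) <= (t - 1) * card(t, n, s)
print("the ratio bound of Theorem 4 holds on the sampled grid")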

Theorem 5

Let \(t, n \ge 2\), \(2n - 1 \le s \le \frac{tn}{2}\) and tuples \((b_{t - 1}, \dots , b_{0})\), \((b'_{t - 1}, \dots , b'_{0})\) be taken arbitrarily from \(\tau _{(t, n, s)}\). Let \((a_{t - 1}, \dots , a_{0})\) be taken uniformly at random from \(\tau _{(t, n, s)}\), and \(\gamma \) be taken uniformly at random from \(\{0, \dots , t - 1\}\). Then,

  1. \(\Pr [b_{\gamma } \le a_{\gamma }] \ge \frac{1}{s + t - 1}\).

  2. \(\Pr [b_{\gamma } \le a_{\gamma } < b'_{\gamma } \mid b \ne b'] \ge \frac{1}{n + 1} \left( \frac{1}{t} \right) ^{t}.\)

Proof

1. Since we take \(\gamma \) at random, we get

$$\begin{aligned} \Pr [b_{\gamma } \le a_{\gamma }] = \frac{1}{t} \sum _{i = 0}^{t - 1} \Pr [b_{i} \le a_{i}]. \end{aligned}$$

Since the probability of \(a_{i} = j\) decreases as j increases, a lower bound can be obtained by using the average value of all \(b_{i}\), namely \(\frac{s}{t}\), in place of each \(b_{i}\). This can be proven with a simple exchange argument, where moving a unit of a larger \(b_{i}\) to a smaller one does not increase the total probability. Thus,

$$\begin{aligned} \Pr [b_{\gamma } \le a_{\gamma }]&\ge \frac{1}{t} \sum _{i = 0}^{t - 1} \Pr [\tfrac{s}{t} \le a_{i}] \\&= \Pr [\tfrac{s}{t} \le a_{0}] = 1 - \Pr [\tfrac{s}{t} > a_{0}]. \end{aligned}$$

Now, \(\Pr [\frac{s}{t} > a_{0}] = \sum _{j = 0}^{\lceil {\frac{s}{t}}\rceil -1} \Pr [a_{0} = j]\), and by Proposition 5,

$$\begin{aligned} \Pr [\tfrac{s}{t} > a_{0}] \le \left( \frac{s}{t} + 1 \right) \left( \frac{t - 1}{s + t - 1} \right) \le \frac{s + t - 2}{s + t - 1}. \end{aligned}$$

Thus,

$$\begin{aligned} \Pr [b_{\gamma } \le a_{\gamma }]&\ge 1 - \Pr [\tfrac{s}{t} > a_{0}] \\&\ge 1 - \frac{s + t - 2}{s + t - 1} = \frac{1}{s + t - 1}. \end{aligned}$$

2. Since \(b \ne b'\) and the components of both tuples add up to s, there must be at least one position \(i \in \{0, \dots , t - 1\}\) where \(b_{i} < b'_{i}\), and \(\Pr [\gamma = i] = \frac{1}{t}\). In addition, the probability \(\Pr [b_{i} \le a_{i} < b'_{i}]\) is lowest when the values \(b_{i} < b'_{i}\) are as large as possible, that is, \(b_{i} = n - 1\) and \(b'_{i} = n\). This is true because the number of tuples in \(\tau _{(t, n, s)}\) with \(a_{i} = j\) decreases as j increases. Thus, bounding from below with this worst case, we get

$$\begin{aligned} \Pr [b_{\gamma } \le a_{\gamma } < b'_{\gamma } \mid b \ne b']&\ge \frac{1}{t} \Pr [a_{\gamma } = n - 1]. \end{aligned}$$

Now, the rest of the proof consists of finding a lower bound for \(\Pr [a_{\gamma } = n - 1]\). We know

$$\begin{aligned} \Pr [a_{\gamma } = n - 1]&= \frac{|\tau _{(t - 1, n, s - (n - 1))}|}{|\tau _{(t, n, s)}|} \\&= \frac{|\tau _{(t - 1, n, s - (n - 1))}|}{|\tau _{(t, n, s - (n - 1))}|} \frac{|\tau _{(t, n, s - (n - 1))}|}{|\tau _{(t, n, s)}|}. \end{aligned}$$

By Eq. (1), \(|\tau _{(t, n, s - (n - 1))}| = \sum _{j = 0}^{n} |\tau _{(t - 1, n, s - (n - 1) - j)}|\), and by Theorem 2, \(|\tau _{(t - 1, n, s - (n - 1) -j)}| \le |\tau _{(t - 1, n, s - (n - 1))}|\), for all \(0 \le j \le n\). Combining these facts and that

$$\begin{aligned} \sum _{j = 0}^{n} |\tau _{(t - 1, n, s - (n - 1) - j)}| \le (n + 1) |\tau _{(t - 1, n, s - (n - 1))}|, \end{aligned}$$

we get

$$\begin{aligned} \frac{|\tau _{(t - 1, n, s - (n - 1))}|}{|\tau _{(t, n, s - (n - 1))}|} \ge \frac{1}{n + 1}. \end{aligned}$$

Thus,

$$\begin{aligned} \Pr [a_{\gamma } = n - 1] \ge \frac{1}{n + 1} \frac{|\tau _{(t, n, s - (n - 1))}|}{|\tau _{(t, n, s)}|}. \end{aligned}$$

Since \(s \ge 2n - 1\), then \(s - (n - 1) \ge n\). So, \(|\tau _{(t, n, s - (n - 1))}| \ge |\tau _{(t, n, n)}| = \left( {\begin{array}{c}n + t - 1\\ t - 1\end{array}}\right) \), and since \(\tau _{(t, n, s)} \subseteq \tau _{(t, s, s)}\) we also have that \(|\tau _{(t, n, s)}| \le |\tau _{(t, s, s)}| = \left( {\begin{array}{c}s + t - 1\\ t - 1\end{array}}\right) \). Substituting both relations in the main equation above and using \(n \ge 2\), we get

$$\begin{aligned}&\Pr [a_{\gamma } = n - 1] \ge \frac{1}{n + 1} \frac{|\tau _{(t, n, n)}|}{|\tau _{(t, s, s)}|} = \frac{1}{n + 1}\frac{\left( {\begin{array}{c}n + t - 1\\ t - 1\end{array}}\right) }{\left( {\begin{array}{c}s + t - 1\\ t - 1\end{array}}\right) } \\&\quad \ge \frac{1}{n + 1} \left( \frac{n}{\frac{nt}{2} + t} \right) ^{t - 1} = \frac{1}{n + 1} \left( \frac{1}{\frac{t}{2} + \frac{t}{n}} \right) ^{t - 1} \\&\quad \ge \frac{1}{n + 1} \left( \frac{1}{t} \right) ^{t - 1}. \end{aligned}$$

Finally,

$$\begin{aligned}&\Pr [b_{\gamma } \le a_{\gamma } < b'_{\gamma } \mid b\ne b'] \ge \frac{1}{t}\Pr [a_{\gamma }=n-1] \\&\quad \ge \frac{1}{n+1}\left( \frac{1}{t}\right) ^t. \end{aligned}$$

\(\square \)
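
Item 1 can also be verified exhaustively for tiny parameters. The sketch below (illustrative only) enumerates \(\tau _{(t, n, s)}\) for one admissible choice of (t, n, s) and checks the bound for every fixed tuple b.

# Exhaustive check of item 1 of Theorem 5 for one tiny admissible choice of
# (t, n, s): for every fixed tuple b, the probability over uniform a and gamma
# that b_gamma <= a_gamma is at least 1 / (s + t - 1).
from fractions import Fraction
from itertools import product

t, n = 4, 2
s = 2 * n - 1                      # smallest s allowed; s <= t * n / 2 also holds
tau = [b for b in product(range(n + 1), repeat=t) if sum(b) == s]

for b in tau:
    hits = sum(1 for a in tau for g in range(t) if b[g] <= a[g])
    assert Fraction(hits, len(tau) * t) >= Fraction(1, s + t - 1)
print("item 1 of Theorem 5 verified for (t, n, s) =", (t, n, s))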

Theorem 6

Let \(t \ge 2\) be given.

  1. For m and \(s \ge 2\) such that \(|\tau _{(t, s - 1, s - 1)}| < 2^{m} \le |\tau _{(t, s, s)}|\), we have, as \(m \rightarrow \infty \),

     $$\begin{aligned} 2^{m} \le |\tau _{(t, s, s)}| = 2^{m}(1 + o(1)). \end{aligned}$$

  2. For m and \(n \ge 2\) such that

     $$\begin{aligned} \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{n^{t}}{\sqrt{n^{2} - 1}} < 2^{m} \le \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{(n + 1)^{t}}{\sqrt{(n + 1)^{2} - 1}}, \end{aligned}$$

     we have, as \(m \rightarrow \infty \),

     $$\begin{aligned} 2^{m} \le \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{(n + 1)^{t}}{\sqrt{(n + 1)^2 - 1}} = 2^{m}(1 + o(1)). \end{aligned}$$

Proof

1. Consider the difference \(|\tau _{(t, s, s)}| - |\tau _{(t, s - 1, s - 1)}|\). Expanding into their binomial forms, we have

$$\begin{aligned}&\left( {\begin{array}{c}s + t - 1\\ t - 1\end{array}}\right) - \left( {\begin{array}{c}s - 1 + t - 1\\ t - 1\end{array}}\right) \\&\quad = \frac{(s + t - 1)!}{(t - 1)! s!} - \frac{(s + t - 2)!}{(t - 1)! (s - 1)!} = \left( {\begin{array}{c}s + t - 2\\ t - 1\end{array}}\right) \frac{t - 1}{s}. \end{aligned}$$

Hence, since t is fixed, we have \(s \rightarrow \infty \) as \(m \rightarrow \infty \), and thus

$$\begin{aligned} |\tau _{(t, s, s)}|&= \left( {\begin{array}{c}s + t - 1\\ t - 1\end{array}}\right) = \left( {\begin{array}{c}s + t - 2\\ t - 1\end{array}}\right) \left( 1 + \frac{t - 1}{s} \right) \\&< 2^{m} \left( 1 + \frac{t - 1}{s} \right) \quad = 2^{m} (1 + o(1)). \end{aligned}$$

2. Since the rightmost term is larger than the leftmost term, we can write

$$\begin{aligned} 2^{m}&\le \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{(n + 1)^{t}}{\sqrt{(n + 1)^{2} - 1}} \\&= \frac{1}{\sqrt{\frac{\pi t}{6}}} \frac{n^{t}}{\sqrt{n^{2} - 1}} \frac{(n + 1)^{t}}{n^{t}} \frac{\sqrt{n^{2} - 1}}{\sqrt{(n + 1)^{2} - 1}} \\&\le 2^{m} \left( 1 + \frac{1}{n} \right) ^{t} \frac{\sqrt{n^{2} - 1}}{\sqrt{n^{2} + 2 n}} = 2^{m}(1 + o(1)), \end{aligned}$$

with the last equality holding since t is fixed and \(n \rightarrow \infty \) as \(m \rightarrow \infty \). \(\square \)
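
Item 1 of Theorem 6 can be observed numerically. In the sketch below (illustrative only; the values of t and m are arbitrary), the smallest admissible s already gives a ratio \(|\tau _{(t, s, s)}| / 2^{m}\) below the \(1 + \frac{t - 1}{s}\) bound from the proof.

# Illustrative check of item 1 of Theorem 6; the values of t and m are
# arbitrary. For the smallest s with 2^m <= |tau_(t, s, s)|, the ratio
# |tau_(t, s, s)| / 2^m stays below 1 + (t - 1)/s, which tends to 1.
from math import comb

t = 8
for m in (16, 32, 48, 64):
    s = 1
    while comb(s + t - 1, t - 1) < 2 ** m:   # |tau_(t, s, s)| = C(s + t - 1, t - 1)
        s += 1
    ratio = comb(s + t - 1, t - 1) / 2 ** m
    assert ratio <= 1 + (t - 1) / s
    print(f"m = {m:3d}: s = {s}, |tau|/2^m = {ratio:.4f}")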

A.7 Algorithms \(\mathcal {M'}^\mathcal {A}\) and \(\mathcal {M''}^{\mathcal {A}}\)

[Algorithms \(\mathcal {M'}^\mathcal {A}\) and \(\mathcal {M''}^{\mathcal {A}}\) are given as figures in the published article.]

A.8 List of \(\textsc {MinGen}\) parameters for t

Table 7 Parameters \((t, n, s)\) using \(\textsc {MinGen}\) with \(30\le t \le 70\)


Cite this article

Perin, L.P., Zambonin, G., Custódio, R. et al. Improved constant-sum encodings for hash-based signatures. J Cryptogr Eng 11, 329–351 (2021). https://doi.org/10.1007/s13389-021-00264-9
