
Binary and \(q\)-ary Tardos codes, revisited


Abstract

The Tardos code is a much-studied collusion-resistant fingerprinting code, with the special property that it has asymptotically optimal length \(m\propto c_0^2\), where \(c_0\) is the number of colluders. In this paper we give alternative security proofs for the Tardos code, working with the assumption that the strongest coalition strategy is position-independent. We employ the Bernstein and Bennett inequalities instead of the typically used Markov inequality. This proof technique requires fewer steps and slightly improves the tightness of the bound on the false negative error probability. We present new results on code length optimization, for both small and asymptotically large coalition sizes.


Notes

  1. The concept of a ‘segment’ can vary wildly. It can be as simple as a video frame or as complex as a Fourier coefficient spread out over many frames. We will use the concept of segments without defining what they are. Ideally, statements about the coding layer are independent of the embedding process.

  2. The channel capacity is a fair measure of how efficient a code can theoretically be. It is an upper bound on the achievable fingerprinting rate. The fingerprinting rate of a \(q\)-ary code can be interpreted as the number of \(q\)-ary symbols needed to isolate one specific user, divided by the length of the code (total number of \(q\)-ary symbols transmitted).

  3. The concept of segments is very general, e.g. they can be combinations of coefficients in any codec.

  4. In the binary case, Tardos’ original scheme is regained by setting \(\kappa =1/2\). We then have \(\mathcal{Q }=\{0,1\}\), \({\varvec{p}}=(p_0,p_1)\) with \(p_0+p_1=1\), and \(F({\varvec{p}})=\frac{1}{\pi -4\arcsin \sqrt{\tau }}(p_0 p_1)^{-1/2}\).

  5. The generalized Beta function of a vector \({\varvec{v}}=(v_1,\cdots ,v_n)\) is defined as \(B({\varvec{v}})=\varGamma (v_1)\varGamma (v_2)\cdots \varGamma (v_n)/\varGamma (v_1+\cdots +v_n)\). For \(n=1\) this reduces to 1.

  6. In fact the expression \(\ln \varepsilon _1^{-1}\) should be replaced by \([\mathrm{Erfc}^\mathrm{inv}(2\varepsilon _1)]^2\), which is smaller. E.g. for \(\varepsilon _1=10^{-10}\) the difference is 12 %; for \(\varepsilon _1=10^{-7}\) it is 16 %.
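The percentages quoted in this note are easy to reproduce; the following sketch (not part of the original text) assumes SciPy and measures the gap between \(\ln \varepsilon _1^{-1}\) and \([\mathrm{Erfc}^\mathrm{inv}(2\varepsilon _1)]^2\) relative to \(\ln \varepsilon _1^{-1}\).

```python
# Illustrative check of the percentages quoted in this footnote (not from the paper).
from math import log
from scipy.special import erfcinv

for eps1 in (1e-10, 1e-7):
    a = log(1 / eps1)              # ln(1/eps1)
    b = erfcinv(2 * eps1) ** 2     # [Erfc^inv(2 eps1)]^2
    print(eps1, round(100 * (a - b) / a), "%")   # prints roughly 12 % and 16 %
```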

  7. In case of an error-correcting code one counts the number of (\(q\)-ary) message symbols. In the fingerprinting case in the catch-one-colluder scenario, the communicated message contains entropy \(\log _q n\), counted in \(q\)-ary symbols.

  8. The relation \(\ln \varepsilon _1^{-1}=\ln n[1+\mathcal{O }(1/\ln n)]\) also holds more generally: e.g. for \(w=f(n\varepsilon _1)\) where \(f\) is some invertible function.

  9. In [22] it was already noted that \(\kappa >1/2\) is problematic, leading to negative terms in the \(\sum _{\varvec{\sigma }}\) for any \(q\).

  10. The notation \(0^+\) stands for an infinitesimally small positive number.

  11. For a given attack strategy, the method of [22, 23] can be used to obtain exact results.

  12. The case \(q=2,\kappa =\textstyle \frac{1}{2}\) is special. Here the \(\varGamma (-\textstyle \frac{1}{2}+\kappa [q-1]+c_0-\sigma _y)\) at \(\sigma _y=c\) has to be combined with the expression \(\{\cdots \}=0\) in order to obtain a non-divergent value \(0\cdot \varGamma (0)=\varGamma (1)=1\).

References

  1. Amiri E., Tardos G.: High rate fingerprinting codes and the fingerprinting capacity. In: Proc. 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 336–345 (2009).

  2. Bennett G.: Probability inequalities for the sum of independent random variables. J. Am. Stat. Assoc. 57(297), 33–45 (1962).


  3. Bernstein S.N.: Theory of Probability. (1927).

  4. Blayer O., Tassa T.: Improved versions of Tardos’ fingerprinting scheme. Des. Codes Cryptogr. 48(1), 79–103 (2008).


  5. Boesten D., Škorić B.: Asymptotic fingerprinting capacity for non-binary alphabets. In: Information Hiding 2011, Lecture Notes in Computer Science, vol. 6958, pp. 1–13. Springer, Berlin (2011).

  6. Boneh D., Shaw J.: Collusion-secure fingerprinting for digital data. IEEE Trans. Inf. Theory 44(5), 1897–1905 (1998).


  7. Charpentier A., Xie F., Fontaine C., Furon T.: Expectation maximization decoding of Tardos probabilistic fingerprinting code. In: Media Forensics and Security, SPIE Proceedings, vol. 7254, pp. 72540 (2009).

  8. Charpentier A., Fontaine C., Furon T., Cox I.J.: An asymmetric fingerprinting scheme based on Tardos codes. In: Information Hiding, Lecture Notes in Computer Science, vol. 6958, pp. 43–58. Springer, Berlin (2011).

  9. Furon T., Guyader A., Cérou F.: On the design and optimization of Tardos probabilistic fingerprinting codes. In: Information Hiding, Lecture Notes in Computer Science, vol. 5284, pp. 341–356. Springer, Berlin (2008).

  10. Furon T., Pérez-Freire L.: Worst case attacks against binary probabilistic traitor tracing codes. CoRR, abs/0903.3480 (2009).

  11. Furon T., Pérez-Freire L., Guyader A., Cérou F.: Estimating the minimal length of Tardos code. In: Information Hiding, Lecture Notes in Computer Science, vol. 5806, pp. 176–190 (2009).

  12. He S., Wu M.: Joint coding and embedding techniques for multimedia fingerprinting. IEEE Trans. Inf. Forensics Secur. 1, 231–248 (2006).


  13. Huang Y.W., Moulin P.: Capacity-achieving fingerprint decoding. In: IEEE Workshop on Information Forensics and Security, pp. 51–55 (2009).

  14. Knessl C., Keller J.B.: Partition asymptotics from recursion equations. Siam J. Appl. Math. 50(2), 323–338 (1990).


  15. Laarhoven T., de Weger B.M.M.: Optimal symmetric Tardos traitor tracing schemes (2011). http://arxiv.org/abs/1107.3441.

  16. Meerwald P., Furon T.: Towards joint Tardos decoding: the ‘Don Quixote’ algorithm. In: Information Hiding. Lecture Notes in Computer Science, vol. 6958, pp. 28–42. Springer, Berlin (2011).

  17. Moulin P.: Universal fingerprinting: capacity and random-coding exponents. Preprint arXiv:0801.3837v2, available at http://arxiv.org/abs/0801.3837 (2008).

  18. Nuida K., Hagiwara M., Watanabe H., Imai H.: Optimal probabilistic fingerprinting codes using optimal finite random variables related to numerical quadrature. CoRR, abs/cs/0610036 (2006).

  19. Nuida K., Fujitsu S., Hagiwara M., Kitagawa T., Watanabe H., Ogawa K., Imai H.: An improvement of discrete Tardos fingerprinting codes. Des. Codes Cryptogr. 52(3), 339–362 (2009).

  20. Nuida K.: Short collusion-secure fingerprint codes against three pirates. In: Information Hiding, Lecture Notes in Computer Science, vol. 6387, pp. 86–102. Springer, Berlin (2010).

  21. Schaathun H.G.: On error-correcting fingerprinting codes for use with watermarking. Multimed. Syst. 13(5–6), 331–344 (2008).


  22. Simone A., Škorić B.: Asymptotically false-positive-maximizing attack on non-binary Tardos codes. In: Information Hiding, Lecture Notes in Computer Science, vol. 6958, pp. 14–27. Springer, Berlin (2011).

  23. Simone A., Škorić B.: Accusation probabilities in Tardos codes: beyond the Gaussian approximation. Des. Codes Cryptogr. 63(3), 379–412 (2012).

  24. Škorić B., Katzenbeisser S., Celik M.U.: Symmetric Tardos fingerprinting codes for arbitrary alphabet sizes. Des. Codes Cryptogr. 46(2), 137–166 (2008).


  25. Škorić B., Vladimirova T.U., Celik M.U., Talstra J.C.: Tardos fingerprinting is better than we thought. IEEE Trans. Inf. Theory 54(8), 3663–3676 (2008).


  26. Škorić B., Katzenbeisser S., Schaathun H.G., Celik M.U.: Tardos fingerprinting codes in the Combined Digit Model. IEEE Trans. Inf. Forensics Secur. 6(3), 906–919 (2011).


  27. Tardos G.: Optimal probabilistic fingerprint codes. In: Proceedings of the 35th Annual ACM Symposium on Theory of Computing (STOC), pp. 116–125 (2003).

  28. Xie F., Furon T., Fontaine C.: On-off keying modulation and Tardos fingerprinting. In: Proc. 10th Workshop on Multimedia and Security (MM&Sec), ACM, pp. 101–106 (2008).


Acknowledgments

We thank Dion Boesten, Jeroen Doumen, Thijs Laarhoven, Antonino Simone, and Benne de Weger for useful discussions. We thank Wil Kortsmit for his help with numerical integrations. This research was funded by STW Sentinels (CREST project, 10518).

Author information

Correspondence to Boris Škorić.

Additional information

Communicated by J. D. Key.

Appendix

1.1 Proofs

1.1.1 Proof of Lemma 11

We take the alphabet labeling \(\mathcal{Q }=\{1,2,\ldots ,q\}\) in this proof, without loss of generality. The normalization constant \(\mathcal{N }\) in (3) is defined as \(\mathcal{N }=\int _\tau ^{1-\tau }\!\mathrm{d}^q p\; \delta (1-\sum _\alpha p_\alpha ) {\varvec{p}}^{-1+\kappa }\). The upper bound \(1-\tau \) on the integration can be replaced by \(\infty \), since the delta function ensures that only the relevant part of the integration region plays a role. We split the integration operator \(\int \mathrm{d}^q p\) into a product of \(q\) operators, and then further split each of them according to \(\int _\tau ^\infty =\int _0^\infty -\int _0^\tau \). This gives rise to a sum of \(2^q\) integration operators, which due to symmetry can be grouped according to the number of \(\int _0^\tau \) factors appearing.

$$\begin{aligned} \mathcal{N }&= \left[ \prod _{\alpha \in \mathcal{Q }}\left( \int _0^\infty \!\!\mathrm{d}p_\alpha -\int _0^\tau \!\!\mathrm{d}p_\alpha \right) \right] {\varvec{p}}^{-1+\kappa }\delta \left( 1-\sum _{\gamma \in \mathcal{Q }}p_\gamma \right) \\&= B(\kappa \mathbf{1 }_q)+ \sum _{b=1}^{q-1}{\left( \begin{array}{c}q\\ b\end{array} \right) }(-1)^b \left[ \prod _{\alpha =1}^b\int _0^\tau \!\!\mathrm{d}p_\alpha \right] \left[ \prod _{\beta =b+1}^q\int _0^\infty \!\!\mathrm{d}p_\beta \right] {\varvec{p}}^{-1+\kappa }\delta \left( 1-\sum _{\gamma \in \mathcal{Q }}p_\gamma \right) . \end{aligned}$$

The maximum value of the index \(b\) is \(q-1\), since at \(b=q\) the delta function can no longer be satisfied. We write \(p_A:=\sum _{\alpha =1}^b p_\alpha \) and \(p_\beta =(1-p_A)s_\beta \), with \(s_\beta \in [0,1]\). Provided that \(\tau <1/q\) (which in practice is always the case) we can then evaluate the \(p_\beta \) integrals,

$$\begin{aligned} \mathcal{N }-B(\kappa \mathbf{1 }_q)&= \sum _{b=1}^{q-1}{\left( \begin{array}{c}q\\ b\end{array} \right) }(-1)^b\left[ \prod _{\alpha =1}^b\int _0^\tau \!\! \mathrm{d}p_\alpha \;p_\alpha ^{-1+\kappa }\right] (1-p_A)^{-1+\kappa (q-b)}\nonumber \\&\times \int _0^\infty \!\!\mathrm{d}^{q-b}s\; {\varvec{s}}^{-1+\kappa }\delta \left( 1-\sum _{a=b+1}^q s_a\right) \nonumber \\&= \sum _{b=1}^{q-1}{\left( \begin{array}{c}q\\ b\end{array} \right) }(-1)^b B(\kappa \mathbf{1 }_{q-b}) \left[ \prod _{\alpha =1}^b\int _0^\tau \!\! \mathrm{d}p_\alpha \;p_\alpha ^{-1+\kappa }\right] (1-p_A)^{-1+\kappa (q-b)}. \nonumber \\ \end{aligned}$$
(76)

We expand in \(\tau \), using the fact that \(p_A=\mathcal{O }(\tau )\). We write \(p_\alpha =\tau u_\alpha \), with \(u_\alpha \in [0,1]\). Using the binomial expansion of \((1-p_A)^{\cdots }\) we get

$$\begin{aligned} (1-p_A)^{-1+\kappa (q-b)}=\sum _{x=0}^\infty \tau ^x {\left( \begin{array}{c}-1+\kappa [q-b]\\ x\end{array} \right) }\left( -\sum _\alpha u_\alpha \right) ^x. \end{aligned}$$
(77)

Substituting this into (76) and performing a multinomial expansion of \((\sum u_\alpha )^x\) yields

$$\begin{aligned} \mathcal{N }&= B(\kappa \mathbf{1 }_q) + \sum _{b=1}^{q-1}{\left( \begin{array}{c} q\\ b\end{array} \right) }(-1)^b B(\kappa \mathbf{1 }_{q-b})\sum _{x=0}^\infty \tau ^{x+b\kappa }(-1)^x {\left( \begin{array}{c}-1+\kappa [q-b]\\ x\end{array} \right) } \zeta _{bx} \nonumber \\ \zeta _{bx}&:= \int _0^1\! \mathrm{d}^b u\; {\varvec{u}}^{-1+\kappa }\left( \sum _{\alpha =1}^b u_\alpha \right) ^x = \sum _{{\varvec{s}}:\; \sum _j s_j=x}{\left( \begin{array}{c}x\\ {\varvec{s}}\end{array} \right) }\prod _{\alpha =1}^b \frac{1}{\kappa +s_\alpha }. \end{aligned}$$
(78)
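As a numerical sanity check (not part of the original proof), the integral expression for \(\zeta _{bx}\) in (78) can be compared with its multinomial form for small \(b\). The sketch below assumes SciPy and uses \(b=2\) with arbitrary test values of \(\kappa \) and \(x\).

```python
# Check of the zeta_{bx} identity in (78) for b = 2 (illustrative values of kappa, x):
#   int_0^1 du1 du2  (u1*u2)^(kappa-1) (u1+u2)^x
#     = sum_{s1+s2=x} binom(x, s1) / ((kappa+s1)(kappa+s2))
from math import comb
from scipy.integrate import dblquad

kappa, x = 0.5, 3   # kappa-1 < 0 gives a mild but integrable singularity at the origin
lhs, _ = dblquad(lambda u2, u1: (u1 * u2) ** (kappa - 1) * (u1 + u2) ** x,
                 0, 1, lambda u1: 0, lambda u1: 1)
rhs = sum(comb(x, s1) / ((kappa + s1) * (kappa + x - s1)) for s1 in range(x + 1))
print(lhs, rhs)   # the two values agree to quadrature accuracy
```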

1.1.2 Proof of Lemma 12

For \(q=2\), the minimization \(\min _y\) in (48) reduces to choosing one out of two expectation values. Because of the \(0\leftrightarrow 1\) symbol symmetry these two values turn out to be identical, up to a minus sign. The negative contribution is always chosen, except where the marking assumption prohibits it. The sum over the vector \({\varvec{\sigma }}\) reduces to a sum over a scalar \(\sigma \). Also because of symbol symmetry, the contribution from \(c_0-\sigma \) equals the one from \(\sigma \). Hence the range of the \(\sigma \)-sum can be restricted to the lower half.

Without loss of generality we take \(c_0\) odd. Then

$$\begin{aligned} M&= \frac{2c_0}{\mathcal{N }}J_1 -\frac{2}{\mathcal{N }}\sum _{\sigma =1}^{(c_0-1)/2}{\left( \begin{array}{c}c_0\\ \sigma \end{array} \right) } |J_2| \\ J_1&:= \int _\tau ^{1-\tau }\!\!\mathrm{d}p\; p^{\psi }(1-p)^{c_0-1+\psi }\nonumber \\ J_2&:= \int _\tau ^{1-\tau }\!\! \mathrm{d}p\; p^{\sigma -1+\psi } (1-p)^{c_0-\sigma -1+\psi }(\sigma -c_0 p).\nonumber \end{aligned}$$
(79)

Further evaluation of the integrals yields

$$\begin{aligned} J_1&= B(1+\psi ,c_0+\psi ) -\int _0^\tau \!\mathrm{d}p\; p^\psi (1-p)^{c_0-1+\psi } -\int _0^\tau \!\mathrm{d}k\; k^{c_0-1+\psi } (1-k)^\psi \nonumber \\&= B(1+\psi ,c_0+\psi ) -\frac{\tau ^{1+\psi }}{1+\psi }[1+\mathcal{O }(c_0\tau )]\end{aligned}$$
(80)
$$\begin{aligned} J_2&= \int _\tau ^{1-\tau }\! \mathrm{d}p\; [p(1-p)]^\psi \frac{\mathrm{d}}{\mathrm{d}p}[p^\sigma (1-p)^{c_0-\sigma }] \nonumber \\&= -\psi \frac{c_0-2\sigma }{c_0+2\psi }B(\sigma +\psi ,c_0-\sigma +\psi ) -\tau ^{\sigma +\psi }\frac{\sigma }{\sigma +\psi }[1+\mathcal{O }(c_0\tau )]. \end{aligned}$$
(81)

In the last step we used integration by parts and made use of \(c_0\tau \ll 1\).
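The Beta-function term in (81) can be checked numerically: over the full interval \([0,1]\) the integral equals that term exactly, and the \(\tau \)-dependent term in (81) accounts for cutting the integration down to \([\tau ,1-\tau ]\). The sketch below (not part of the original proof) assumes SciPy and uses arbitrary test values.

```python
# Illustrative check of the leading Beta-function term in (81):
#   int_0^1 p^(sigma-1+psi) (1-p)^(c0-sigma-1+psi) (sigma - c0*p) dp
#     = -psi (c0 - 2*sigma)/(c0 + 2*psi) * B(sigma+psi, c0-sigma+psi)
from math import gamma
from scipy.integrate import quad

c0, sigma, psi = 9, 2, 0.2        # arbitrary test values with 1 <= sigma <= (c0-1)/2
B = lambda a, b: gamma(a) * gamma(b) / gamma(a + b)
lhs, _ = quad(lambda p: p**(sigma - 1 + psi) * (1 - p)**(c0 - sigma - 1 + psi)
              * (sigma - c0 * p), 0, 1)
rhs = -psi * (c0 - 2 * sigma) / (c0 + 2 * psi) * B(sigma + psi, c0 - sigma + psi)
print(lhs, rhs)   # agree to quadrature accuracy
```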

Next we look at the Beta function term in (81) and compare its magnitude to the factor \(\tau ^{\sigma +\psi }\) in the last term. We distinguish between two cases:

  • \(\sigma \ll c_0\). In this case we apply Lemma 4 and obtain

    $$\begin{aligned} \psi B(\sigma +\psi ,c_0-\sigma +\psi )=\psi \varGamma (\sigma +\psi ) c_0^{-\sigma -\psi }[1+\mathcal{O }(1/c_0)]. \end{aligned}$$
    (82)

    The condition \(\tau < |\psi |^{\frac{1}{1+\psi }}/c_0\) that we imposed in the lemma makes sure that the \(\tau ^{\sigma +\psi }\) term ‘loses’: we get \(\tau ^{\sigma +\psi }<|\psi |^{\frac{\sigma +\psi }{1+\psi }}c_0^{-\sigma -\psi }\le |\psi | c_0^{-\sigma -\psi }\).

  • \(\sigma \) of the same order as \(c_0\). We write \(\sigma =\alpha c_0\), with \(\alpha <\textstyle \frac{1}{2}\), \(\alpha \gg 1/c_0\). Applying Lemma 5 we find

    $$\begin{aligned} \psi B(\sigma +\psi ,c_0-\sigma +\psi )= \psi \frac{\sqrt{2\pi }}{\sqrt{c_0}}[\alpha (1-\alpha )]^{-\textstyle \frac{1}{2}+\psi }e^{-c_0 E(\alpha )} \left[ 1+\mathcal{O }\left( \frac{1}{c_0}\right) \right] . \end{aligned}$$
    (83)

    Again, the imposed condition on \(\tau \) causes the \(\tau ^{\sigma +\psi }\) term to ‘lose’: we have \(\tau ^{\sigma +\psi }<|\psi |c_0^{-\sigma -\psi }=|\psi |c_0^{-\psi }\exp [-c_0\alpha \ln c_0]\). Since \(\alpha \ln c_0 > E(\alpha )\) for \(\alpha \gg 1/c_0\) and large enough \(c_0\), we have an expression that is exponentially smaller than (83) in the limit \(c_0\rightarrow \infty \).

We conclude that the term containing the Beta function determines the sign of \(J_2\). Furthermore, the factor \(c_0-2\sigma \) is positive. The Beta function is also positive. Hence for sufficiently large \(c_0\) we have \(|J_2|=J_2\cdot \mathrm{sign}(-\psi )\) for all \(\sigma \in \{1,\cdots ,{\textstyle \frac{c_0-1}{2}}\}\).

Then we go back to (79): we move the \(\sum _\sigma \) into the \(J_2\)-integral and use the following summation equality,

$$\begin{aligned} \sum _{\sigma \!=\!1}^{(c_0-1)/2}{\left( \begin{array}{c}c_0\\ \sigma \end{array} \right) }p^\sigma (1\!-\!p)^{c_0\!-\!\sigma }(\sigma -c_0p)\!=\! c_0 p(1-p)^{c_0}-\frac{c_0!}{\left[ (\frac{c_0-1}{2})!\right] ^2} [p(1-p)]^{(c_0+1)/2}.\nonumber \\ \end{aligned}$$
(84)

Finally we express the integrals as incomplete Beta functions.
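The summation equality (84) is easy to verify numerically; the following sketch (not part of the original proof) checks it for an arbitrary odd \(c_0\) and a generic \(p\).

```python
# Illustrative numerical check of the summation equality (84).
from math import comb, factorial

c0, p = 7, 0.31   # arbitrary odd c0 and p in (0,1)
lhs = sum(comb(c0, s) * p**s * (1 - p)**(c0 - s) * (s - c0 * p)
          for s in range(1, (c0 - 1) // 2 + 1))
rhs = c0 * p * (1 - p)**c0 \
      - factorial(c0) / factorial((c0 - 1) // 2)**2 * (p * (1 - p))**((c0 + 1) // 2)
print(lhs, rhs)   # equal up to floating-point rounding
```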

1.1.3 Proof of Corollary 7

In the limit \(c_0\rightarrow \infty \) we have \(\tau \downarrow 0\), so that the incomplete Beta functions become complete. We look at the first term in (58). If \(\psi <0\) then it vanishes. Using Lemma 4 we see that the Beta function scales as

$$\begin{aligned} c_0 B(1+\psi ,c_0+\psi )\sim c_0^{-\psi }. \end{aligned}$$
(85)

Hence this term disappears for \(\psi >0\) as well. In the second term we use the doubling formula for the Gamma function, \(c_0!=(2^{c_0}/\sqrt{\pi })\varGamma (c_0/2+1/2)\varGamma (c_0/2+1)\) and, again using Lemma 4,

$$\begin{aligned} B(c_0/2+\kappa ,c_0/2+\kappa )=\frac{\sqrt{\pi }}{2^{c_0+2\kappa -1}} \frac{\varGamma (c_0/2+\kappa )}{\varGamma (c_0/2+1/2+\kappa )} \sim \frac{\sqrt{\pi }}{2^{2\psi }} \frac{(c_0/2)^{-1/2}}{2^{c_0}} \end{aligned}$$
(86)

We divide by \([\varGamma (c_0/2+1/2)]^2\) and use \(\varGamma (c_0/2+1)/\varGamma (c_0/2+1/2)\sim \sqrt{c_0/2}\). Using the doubling formula again we rewrite the normalization factor \(\mathcal{N }\) as

$$\begin{aligned} \mathcal{N }(2,\textstyle \frac{1}{2}+\psi ,0)=B(\textstyle \frac{1}{2}+\psi ,\textstyle \frac{1}{2}+\psi )= 2^{-2\psi }\sqrt{\pi }\frac{\varGamma (\textstyle \frac{1}{2}+\psi )}{\varGamma (1+\psi )}. \end{aligned}$$
(87)

Combining all the ingredients yields the end result.
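Both the doubling formula for the Gamma function and the Gamma-function identity in (87) are straightforward to verify numerically; a minimal sketch (not part of the original proof), using arbitrary test values, is given below.

```python
# Illustrative checks of the Gamma-function doubling formula used above and of (87).
from math import gamma, factorial, sqrt, pi

c0, psi = 11, 0.17   # arbitrary test values
# doubling formula: c0! = (2^c0 / sqrt(pi)) * Gamma(c0/2 + 1/2) * Gamma(c0/2 + 1)
print(factorial(c0), 2**c0 / sqrt(pi) * gamma(c0/2 + 0.5) * gamma(c0/2 + 1))
# (87): B(1/2+psi, 1/2+psi) = 2^(-2*psi) * sqrt(pi) * Gamma(1/2+psi) / Gamma(1+psi)
B = lambda a, b: gamma(a) * gamma(b) / gamma(a + b)
print(B(0.5 + psi, 0.5 + psi),
      2**(-2 * psi) * sqrt(pi) * gamma(0.5 + psi) / gamma(1 + psi))
```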

1.1.4 Proof of Lemma 13

We write \(\mathbb{E }_{\varvec{p}}\left[ \frac{\sigma _y-c_0 p_y}{\sqrt{p_y(1-p_y)}}{\varvec{p}}^{\varvec{\sigma }}\right] = \frac{I}{\mathcal{N }}\), where \(I\) is a \(q\)-dimensional integral, split up as in Appendix (Proof of Lemma 11),

$$\begin{aligned} I=\left[ \prod _{\alpha \in \mathcal{Q }}\left( \int _0^1\!\!\mathrm{d}p_\alpha -\int _0^\tau \!\!\mathrm{d}p_\alpha \right) \right] \delta \left( 1-\sum _{\beta \in \mathcal{Q }}p_\beta \right) {\varvec{p}}^{-1+\kappa +{\varvec{\sigma }}}\frac{\sigma _y-c_0 p_y}{\sqrt{p_y(1-p_y)}}. \end{aligned}$$
(88)

The product \(\prod _\alpha \) can be rewritten as a sum of different \(q\)-dimensional integrals; in each of these integrals a different choice is made of which of the \(\alpha \) are integrated over the \((0,\tau )\) interval. We denote the set of these symbols as \(\mathcal{A }\). For brevity we will use the notation \(a=|\mathcal{A }|\), \(\mathcal{B }=\mathcal{Q }\setminus \mathcal{A }\), \(\sigma _\mathcal{A }=\sum _{\alpha \in \mathcal{A }}\sigma _\alpha \), \(\sigma _\mathcal{B }=\sum _{\beta \in \mathcal{B }}\sigma _\beta \), \(P_\mathcal{A }=\sum _{\alpha \in \mathcal{A }}p_\alpha \), \(P_\mathcal{B }=\sum _{\beta \in \mathcal{B }}p_\beta \).

$$\begin{aligned} I&= \sum _{\mathcal{A }\subset \mathcal{Q }}(-1)^a \int _0^\tau \!\mathrm{d}^a p_\mathcal{A }\; {\varvec{p}}_\mathcal{A }^{-1+\kappa +{\varvec{\sigma }}_\mathcal{A }} \int _0^{1-P_\mathcal{A }}\!\mathrm{d}^{q-a}p_\mathcal{B }\; {\varvec{p}}_\mathcal{B }^{-1+\kappa +{\varvec{\sigma }}_\mathcal{B }} \delta \nonumber \\&\quad \times \, (1-P_\mathcal{A }-P_\mathcal{B })\frac{\sigma _y-c_0 p_y}{\sqrt{p_y(1-p_y)}}. \end{aligned}$$
(89)

(Note that \(\mathcal{A }=\mathcal{Q }\) does not occur in the sum.) We split the \(\sum _\mathcal{A }\) into two parts: one with \(y\in \mathcal{A }\) (giving rise to a contribution to \(I\) denoted as \(I_1\)) and one with \(y\in \mathcal{B }\) (giving rise to \(I_2\)). In both parts we write, for \(\beta \in \mathcal{B }\), \(p_\beta =(1-P_\mathcal{A })s_\beta \), with \(s_\beta \in (0,1)\). We have \(\delta (1-P_\mathcal{A }-P_\mathcal{B })=(1-P_\mathcal{A })^{-1}\delta (1-\sum _{\beta \in \mathcal{B }}s_\beta )\). The integrals over the ‘\(\mathcal{B }\)’ degrees of freedom can be evaluated to Beta functions.

We first derive the result for \(I_1\).

$$\begin{aligned} I_1 \!=\! \mathop {\sum }\limits _{\mathop {\mathcal{A }\subset \mathcal{Q }:}\limits _{y\in \mathcal{A }}}(\!-\!1)^a B(\kappa \mathbf{1 }_{q-a}\!+\!{\varvec{\sigma }}_\mathcal{B }) \int _0^\tau \!\mathrm{d}^a p_\mathcal{A }\; {\varvec{p}}_\mathcal{A }^{-1+\kappa +{\varvec{\sigma }}_\mathcal{A }}\frac{\sigma _y-c_0 p_y}{\sqrt{p_y(1\!-\!p_y)}} (1\!-\!P_\mathcal{A })^{-1+\kappa [q-a]\!+\!\sigma _\mathcal{B }}. \nonumber \\ \end{aligned}$$
(90)

We use binomial and multinomial expansions to write

$$\begin{aligned} \frac{1}{\sqrt{1-p_y}}&= \sum _{x=0}^\infty {\left( \begin{array}{c}-1/2\\ x\end{array} \right) }p_y^x,\nonumber \\ (1-P_\mathcal{A })^u&= \sum _{z=0}^\infty {\left( \begin{array}{c}u\\ z\end{array} \right) }(-P_\mathcal{A })^z \nonumber \\&= \sum _{z=0}^\infty {\left( \begin{array}{c}u\\ z\end{array} \right) }(-1)^z\sum _{{\varvec{s}}\in \mathbb{N }^\mathcal{A }:\; s_\mathcal{A }=z}{\left( \begin{array}{c}z\\ {\varvec{s}}\end{array} \right) } \prod _{\alpha \in \mathcal{A }}p_\alpha ^{s_\alpha }. \end{aligned}$$
(91)

Substitution into (90) yields an expression containing \(a\) independent integrals that can be evaluated analytically. Furthermore we re-arrange the \(x\) and \(z\) summations by introducing \(j:=x+z\),

$$\begin{aligned} \sum _{x=0}^\infty \sum _{z=0}^\infty f(x,z)=\sum _{j=0}^\infty \sum _{z=0}^j f(j-z,z). \end{aligned}$$
(92)
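The reindexing (92) can be illustrated on a truncated double sum; the snippet below (not from the paper) uses an arbitrary test function and cutoff.

```python
# Illustration of the index rearrangement (92) on a truncated double sum.
J = 8                                              # truncation: keep terms with x + z <= J
f = lambda x, z: 0.3**x * 0.5**z / (1 + x + z)     # arbitrary test function
s1 = sum(f(x, z) for x in range(J + 1) for z in range(J + 1) if x + z <= J)
s2 = sum(f(j - z, z) for j in range(J + 1) for z in range(j + 1))
print(s1, s2)   # identical
```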

Thus we obtain

$$\begin{aligned} I_1&= \sum _{j=0}^\infty \!\sum _{\mathcal{A }\subset \mathcal{Q }:\; y\in \mathcal{A }}\!\!\!\!(-1)^{j+a} \tau ^{j+\kappa a+\sigma _\mathcal{A }-\textstyle \frac{1}{2}}B(\kappa \mathbf{1 }_{q-a}+{\varvec{\sigma }}_\mathcal{B }) \sum _{z=0}^j {\left( \begin{array}{c}-\textstyle \frac{1}{2}\\ j-z\end{array} \right) } \nonumber \\&{\left( \begin{array}{c}-1+\kappa [q-a]+\sigma _\mathcal{B }\\ z\end{array} \right) }\sum _{{\varvec{s}}\in \mathbb{N }^\mathcal{A }:\; s_\mathcal{A }=z}{\left( \begin{array}{c}z\\ {\varvec{s}}\end{array} \right) } \left[ \prod _{\alpha \in \mathcal{A }\setminus y}\frac{1}{\kappa +\sigma _\alpha +s_\alpha }\right] \nonumber \\&\left( \frac{\sigma _y}{\kappa +\sigma _y+s_y+j-z-\textstyle \frac{1}{2}}-\frac{c_0\tau }{\kappa +\sigma _y+s_y+j-z+\textstyle \frac{1}{2}}\right) . \end{aligned}$$
(93)

We use the constraint \(s_\mathcal{A }=z\) to eliminate the \(z\)-sum,

$$\begin{aligned} I_1&= \sum _{j=0}^\infty \!\sum _{\mathcal{A }\subset \mathcal{Q }:\; y\in \mathcal{A }} \!\!\!\!(-1)^{j+a} \tau ^{j+\kappa a+\sigma _\mathcal{A }-\textstyle \frac{1}{2}}B(\kappa \mathbf{1 }_{q-a}+{\varvec{\sigma }}_\mathcal{B }) \sum _{{\varvec{s}}\in \mathbb{N }^\mathcal{A }:\; s_\mathcal{A }\le j} {\left( \begin{array}{c}-\textstyle \frac{1}{2}\\ j-s_\mathcal{A }\end{array} \right) } \nonumber \\&\frac{\varGamma (\kappa [q-a]+\sigma _\mathcal{B })}{\varGamma (\kappa [q-a]+\sigma _\mathcal{B }-s_\mathcal{A })} \frac{1}{\prod _\alpha s_\alpha !} \left[ \prod _{\alpha \in \mathcal{A }\setminus \{y\}}\frac{1}{\kappa +\sigma _\alpha +s_\alpha }\right] \nonumber \\&\left( \frac{\sigma _y}{\kappa +\sigma _y+s_y+j-s_\mathcal{A }-\textstyle \frac{1}{2}}-\frac{c_0\tau }{\kappa +\sigma _y+s_y+j-s_\mathcal{A }+\textstyle \frac{1}{2}}\right) \end{aligned}$$
(94)

Next we do a similar derivation for \(I_2\). Integration over the ‘\(\mathcal{B }\)’ degrees of freedom gives

$$\begin{aligned} I_2&= \sum _{x=0}^\infty {\left( \begin{array}{c}-\textstyle \frac{1}{2}\\ x\end{array} \right) } \sum _{\mathcal{A }\subseteq \mathcal{Q }\setminus \{y\}}(-1)^{x+a} \int _0^\tau \!\mathrm{d}^a p_\mathcal{A }\; {\varvec{p}}_\mathcal{A }^{-1+\kappa +{\varvec{\sigma }}_\mathcal{A }} (1-P_\mathcal{A })^{-{\textstyle \frac{3}{2}}+\kappa [q-a]+\sigma _\mathcal{B }+x} \nonumber \\&\left\{ \sigma _y B\left( \kappa \mathbf{1 }+{\varvec{\sigma }}_\mathcal{B }+{\varvec{e}}_y\left[ x-\textstyle \frac{1}{2}\right] \right) -c_0(1-P_\mathcal{A })B\left( \kappa \mathbf{1 }+{\varvec{\sigma }}_\mathcal{B }+{\varvec{e}}_y \left[ x+\textstyle \frac{1}{2}\right] \right) \right\} . \end{aligned}$$
(95)

Expansion of \((1-P_\mathcal{A })^{\cdots }\) as in (91) followed by \(\int \mathrm{d}^a p_\mathcal{A }\) integration yields

$$\begin{aligned} I_2&= \sum _{z=0}^\infty \sum _{\mathcal{A }\subseteq \mathcal{Q }\setminus \{y\}}(-1)^{z+a}\tau ^{z+\kappa a+\sigma _\mathcal{A }} \left[ \prod _{\beta \in \mathcal{B }\setminus \{y\}}\varGamma (\kappa +\sigma _\beta )\right] \nonumber \\&\left[ \sum _{x=0}^\infty {\left( \begin{array}{c}-\textstyle \frac{1}{2}\\ x\end{array} \right) }(-1)^x\left\{ \sigma _y \xi _x -c_0 \xi _{x+1} \right\} \right] \sum _{{\varvec{s}}\in \mathbb{N }^\mathcal{A }:\; s_\mathcal{A }=z}{\left( \begin{array}{c}z\\ {\varvec{s}}\end{array} \right) }\frac{1}{z!}\left[ \prod _{\alpha \in \mathcal{A }} \frac{1}{\kappa +\sigma _\alpha +s_\alpha }\right] \nonumber \\ \xi _x&= \frac{\varGamma \left( \kappa +\sigma _y+x-\textstyle \frac{1}{2}\right) }{\varGamma \left( \kappa [q-a]+\sigma _\mathcal{B }-z+x-\textstyle \frac{1}{2}\right) }. \end{aligned}$$
(96)

Finally we use the following identity to get rid of the \(x\)-sum,

$$\begin{aligned} \sum _{x=0}^\infty {\left( \begin{array}{c}-\textstyle \frac{1}{2}\\ x\end{array} \right) }(-1)^x \frac{\varGamma (u+x)}{\varGamma \left( w+\textstyle \frac{1}{2}+x\right) }= \frac{\varGamma (u)\varGamma (w-u)}{\varGamma \left( w-u+\textstyle \frac{1}{2}\right) \varGamma (w)}, \end{aligned}$$
(97)

with \(u=\kappa +\sigma _y-\textstyle \frac{1}{2}\), \(w=\kappa [q-a]+\sigma _\mathcal{B }-z-1\).
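Identity (97), which is essentially a Gauss hypergeometric summation, can also be checked numerically; the sketch below (not part of the original proof) assumes SciPy, truncates the series, and uses arbitrary test values with \(w>u>0\) so that the sum converges.

```python
# Illustrative numerical check of identity (97), truncating the series at 400 terms.
from math import gamma, lgamma, exp
from scipy.special import binom

u, w = 1.3, 4.2   # arbitrary test values with w > u > 0
lhs = sum(binom(-0.5, x) * (-1)**x * exp(lgamma(u + x) - lgamma(w + 0.5 + x))
          for x in range(400))
rhs = gamma(u) * gamma(w - u) / (gamma(w - u + 0.5) * gamma(w))
print(lhs, rhs)   # agree to several decimal places
```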

1.1.5 Proof of Lemma 14

For \(\tau =0\), the \((q-1)\)-dimensional integral \(\mathbb{E }_p\) occurring in (48) can be evaluated exactly, yielding generalized Beta functions. These can be rearranged [22, 24] to yield \(M_0 =\)

$$\begin{aligned} \mathbb{E }_{\varvec{\sigma }}^{(0)} \!\!\left[ \min _{y:\;\sigma _y\ge 1} \frac{\varGamma \left( \kappa +\sigma _y-\textstyle \frac{1}{2}\right) \varGamma \left( \kappa [q-1] +c_0-\sigma _y-\textstyle \frac{1}{2}\right) }{\varGamma (\kappa +\sigma _y)\varGamma (\kappa [q-1] +c_0-\sigma _y)} \!\! \left\{ c_0\left( \textstyle \frac{1}{2}-\kappa \right) +\sigma _y(\kappa q-1) \right\} \right] .\nonumber \\ \end{aligned}$$
(98)

All the Gamma functions are positive, since \(\sigma _y\ge 1\) causes all their arguments to be nonnegative. Furthermore, the condition \(\kappa \in [{\textstyle \frac{1}{2(q-1)}},\textstyle \frac{1}{2}]\) makes sure that the expression \(\{\cdots \}\) is positive (see footnote 12) at \(\sigma _y\le c_0-1\) and nonnegative at \(\sigma _y=c_0\). That proves that \(M_0>0\) independent of \(c_0\).

It was shown in [24] that (98) is of order \(\mathcal{O }(1)\) in the limit \(c_0\rightarrow \infty \). Finally, a series expansion of (98), of which we omit the details, shows that the correction to the leading order is \(+\mathcal{O }({\textstyle \frac{1}{c_0}})\). \(\square \)
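The positivity claim for the braced expression in (98) is elementary but easy to get wrong; the quick scan below (not part of the original proof) covers the stated \(\kappa \)-range for a few alphabet sizes \(q\ge 3\) (the boundary case \(q=2,\kappa =\textstyle \frac{1}{2}\) is handled by footnote 12).

```python
# Illustrative scan of the brace in (98): c0*(1/2-kappa) + sigma_y*(kappa*q-1)
# for kappa in [1/(2(q-1)), 1/2], q >= 3 and a few coalition sizes c0.
import numpy as np

for q in (3, 4, 6):
    for kappa in np.linspace(1 / (2 * (q - 1)), 0.5, 9):
        for c0 in (2, 5, 17, 100):
            brace = [c0 * (0.5 - kappa) + s * (kappa * q - 1) for s in range(1, c0 + 1)]
            assert all(v > 0 for v in brace[:-1]), (q, kappa, c0)   # positive for sigma_y <= c0-1
            assert brace[-1] >= -1e-12, (q, kappa, c0)              # nonnegative at sigma_y = c0
print("brace in (98) positive/nonnegative on all test points")
```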

1.1.6 Proof of Theorem 4

The case \(q\ge 3\)

We start from Lemma 13. The \(M_0\) part follows from setting \(z=0\), \(\mathcal{A }=\emptyset \) in \(I_2\). We use the notation \(Y\) for the symbol choice \(y\) that achieves the minimum in (48). Note that \(Y\) is a function of \({\varvec{\sigma }}\). For the sub-leading term, there are several competitors (1 to 4 listed below). Furthermore, there is a positive \(\mathcal{O }(1/c_0)\) term from \(M_0/M_0^\infty =1+\mathcal{O }(1/c_0)\), which has to be taken into account as well.

  1.

    Set \(j=0\), \(\mathcal{A }=\{Y\}\) in \(I_1\) and take the \(\tau =0\) part of \(\mathcal{N }\). This yields the following contribution to \(M\):

    $$\begin{aligned} \triangle M_1 =\frac{-1}{B(\kappa \mathbf{1 }_q)}\sum _{\varvec{\sigma }}{\left( \begin{array}{c} c_0\\ {\varvec{\sigma }}\end{array} \right) } \tau ^{\kappa +\sigma _Y-\textstyle \frac{1}{2}} B(\kappa +{\varvec{\sigma }}_{\mathcal{Q }\setminus \{Y\}})\frac{\sigma _Y}{\kappa +\sigma _Y-1/2}. \end{aligned}$$
    (99)

    We have to determine if it is possible for \(\sigma _Y=1\) to occur, since this gives the lowest power of \(\tau \). Close inspection of the function \(W\) (52) reveals that asymptotically \(W(c_0-1)-W(1)\rightarrow \frac{\sqrt{c_0}}{\sqrt{1-1/c_0}}(\kappa q-1)(1-2/c_0)\). For \(\kappa > \frac{1}{q}\) we have \(W(1) < W(c_0-1)\), which means that \({\varvec{\sigma }}\)-vectors of the form \((1,c_0-1,0,\cdots ,0)\) will indeed lead to the selection of the symbol that occurs once, i.e. \(\sigma _Y=1\). Furthermore, \(W(c_0-2)<W(1)\), which means that the above form of \({\varvec{\sigma }}\) is the only one that can yield \(\sigma _Y=1\). Substitution of this form into (99) gives

    $$\begin{aligned} \triangle M_1 = \frac{-\tau ^{\kappa +\textstyle \frac{1}{2}}}{B(\kappa \mathbf{1 }_q)}\sum _{\varvec{\sigma }}\sum _{y\in \mathcal{Q }}\delta _{\sigma _y,1}\sum _{\alpha \in \mathcal{Q }\setminus y}\delta _{\sigma _\alpha ,c_0-1} {\left( \begin{array}{c} c_0\\ {\varvec{\sigma }}\end{array} \right) } \frac{[\varGamma (\kappa )]^{q-2}\varGamma (\kappa +c_0-1)}{\varGamma (\kappa [q-1]+c_0-1)} \frac{1}{\kappa +\textstyle \frac{1}{2}}\nonumber \\ \end{aligned}$$
    (100)

    which reduces to the expression in the first row of the table. For \(\kappa <1/q\) the case \(\sigma _Y=1\) does not occur, and \(\triangle M_1\) (99) does not contain dominant contributions.

  2.

    Take \(M_0\) and the leading order correction to \(\mathcal{N }(q,\kappa ,0)\). From Corollary 6 we get

    $$\begin{aligned} \triangle M_2= M_0\frac{q}{\kappa B(\kappa ,\kappa q-\kappa )} \tau ^\kappa . \end{aligned}$$
    (101)

    Note that for \(\kappa >1/q\) we have \(\triangle M_2/\triangle M_1=\mathcal{O }(1/c_0\sqrt{\tau })\ll 1\).

  3.

    Take \(\mathcal{N }(q,\kappa ,0)\) and set \(\mathcal{A }=\emptyset \), \(z=1\) in \(I_2\). This yields a contribution

    $$\begin{aligned} \triangle M_3&= c_0\tau \mathbb{E }_{\varvec{\sigma }}^{(0)}\left[ (\kappa q+c_0-1) \frac{\varGamma \left( \kappa +\sigma _Y-\textstyle \frac{1}{2}\right) \varGamma \left( \kappa [q-1]+c_0-\sigma _Y-{\textstyle \frac{3}{2}}\right) }{\varGamma (\kappa +\sigma _Y)\varGamma (\kappa [q-1]+c_0-\sigma _Y-1)}\right. \nonumber \\&\quad \left. \left\{ \frac{\sigma _Y}{c_0}(2-\kappa q)-\left( \textstyle \frac{1}{2}-\kappa \right) \right\} \right] , \end{aligned}$$
    (102)

    with \(\mathbb{E }_{\varvec{\sigma }}^{(0)}\) as defined in (41). Using Lemma 4 we see that \(\triangle M_3\) is of order \(\mathcal{O }(c_0\tau )\) when \(\sigma _Y=\mathcal{O }(c_0)\) and even smaller if \(\sigma _Y=\mathcal{O }(1)\). Thus for \(\kappa >1/q\) we have \(\triangle M_3/\triangle M_1=o(\tau ^{1/2-\kappa })\ll 1\). In the case \(\sigma _Y=\mathcal{O }(c_0)\) we can write

    $$\begin{aligned} \triangle M_3\rightarrow c_0\tau \mathbb{E }_{\varvec{\sigma }}^{(0)}\left[ \frac{\frac{\sigma _Y}{c_0}(2-\kappa q)-\left( \textstyle \frac{1}{2}-\kappa \right) }{\sqrt{\frac{\sigma _Y}{c_0}\left( 1-\frac{\sigma _Y}{c_0}\right) }} \right] . \end{aligned}$$
    (103)
  4.

    Take \(\mathcal{N }(q,\kappa ,0)\) and set \(\mathcal{A }=\{\gamma \}\) (with \(\gamma \ne Y\)), \(\sigma _\gamma =0\), \(z=0\) in \(I_2\). The contribution to \(M\) is

    $$\begin{aligned} \triangle M_4&= \frac{-\tau ^\kappa }{\kappa }\mathbb{E }_{\varvec{\sigma }}^{(0),q\rightarrow q-1}\left[ \frac{\varGamma (\kappa +\sigma _Y-\textstyle \frac{1}{2})\varGamma (\kappa [q-2]+c_0-\sigma _Y-{\textstyle \frac{1}{2}})}{\varGamma (\kappa +\sigma _Y)\varGamma (\kappa [q-2]+c_0-\sigma _Y)}\right. \nonumber \\&\quad \quad \left. \left\{ c_0\left( \textstyle \frac{1}{2}-\kappa \right) +\sigma _Y(\kappa [q-1]-1) \right\} \right] \nonumber \\&= \frac{-\tau ^\kappa }{\kappa }M_0^{q\rightarrow q-1} \end{aligned}$$
    (104)

    where the “\(q\rightarrow q-1\)” denotes that the alphabet has effectively been reduced by the exclusion of the symbol \(\gamma \). The largest possible contributions occur when \(\sigma _Y=1\) (case \(\kappa >1/q\)); the corresponding form of \({\varvec{\sigma }}=(1,c_0-1,0,\cdots ,0)\) happens with probability \(\mathcal{O }(c_0^{-\kappa [q-1]})\). Again using Lemma 4 we conclude that \(\triangle M_4=\mathcal{O }(\tau ^\kappa c_0^{1/2-\kappa [q-1]})\). We have \(\triangle M_4/\triangle M_1=\mathcal{O }(c_0^{-(\kappa q-1)-(1/2-\kappa )}/(c_0\sqrt{\tau }))\). Finally, with \(\kappa q>1\),\(\kappa <\textstyle \frac{1}{2}\) and \(c_0\sqrt{\tau }\rightarrow \infty \) we find \(\triangle M_4\ll \triangle M_1\). In the case \(\kappa <1/q\), we have \(\sigma _Y=\mathcal{O }(c_0)\), yielding \(\triangle M_4=\mathcal{O }(\tau ^\kappa )\).

For \(\kappa >1/q\), the \(\triangle M_1\) is of larger order than \(\triangle M_2\), \(\triangle M_3\), \(\triangle M_4\). Furthermore, \(\triangle M_1\) is also of larger order than the \(\mathcal{O }(1/c_0)\) correction. This is seen as follows: \(c_0\tau ^{\kappa +1/2}/[1/c_0]=(c_0\sqrt{\tau })(c_0\tau ^\kappa )\); use \(\kappa <1/2\) and \(c_0\sqrt{\tau }\rightarrow \infty \).

For \(\kappa <1/q\), the contestants are \(\triangle M_3=\mathcal{O }(c_0\tau )\) (\(\triangle M_3>0\)) and \(\triangle M_2+\triangle M_4=\mathcal{O }(\tau ^\kappa )\). Their quotient is \(\tau ^\kappa /\triangle M_3\sim c_0^{-1}\tau ^{\kappa -1}\sim c_0^{-1+\nu (1-\kappa )}\).

  • For \(\nu <1/(1-\kappa )\), the \(c_0\tau \) wins. Note that \(c_0\tau \) dominates the \(1/c_0\) correction, since \(c_0\tau /[1/c_0]=(c_0\sqrt{\tau })^2\) with \(c_0\sqrt{\tau }\rightarrow \infty \).

  • For \(\nu >1/(1-\kappa )\), the \(\tau ^\kappa \) wins. Note that \(\tau ^\kappa \) dominates the \(1/c_0\) correction, since we have \(\tau ^\kappa /(1/c_0)\sim c_0^{1-\nu \kappa }\) with \(\nu <1/\kappa \).

The case \(q=2\)

We start from Lemma 12. The \(\tau ^\kappa \) term in the last row of the table comes from taking all the \(p\)-integrals with \(\tau =0\) and then dividing by \(\mathcal{N }\) as given in Corollary 6.

All the other leading order corrections to \(M_0\) are obtained from the Marking Assumption term (the first term) and from the \(\sigma =1\) term in the summation; in both cases the correction can be computed as an integration \(\int _0^\tau \!\mathrm{d}p(\cdots )\), and the leading order correction is proportional to \(\int _0^\tau \!\mathrm{d}p\; p^{-1/2+\kappa }=\tau ^{1/2+\kappa }/(1/2+\kappa )\). It turns out that for \(\sigma =1\) the sign of the integral is \(\mathrm{sgn}(1/2-\kappa -0^+)\). For \(\kappa \ge \textstyle \frac{1}{2}\) the leading order corrections add up, yielding \(\mathcal{O }(c_0\tau ^{1/2+\kappa })\). However, for \(\kappa <\textstyle \frac{1}{2}\) the leading order corrections cancel each other, and the next terms (of relative order \(c_0\tau \ll 1\)) become dominant.

1.1.7 Proof of Theorem 5

We give the proof case by case. We refer to the table in Theorem 4 as ‘the table’.

  • \(q\ge 3, \kappa \in ({\textstyle \frac{1}{q}},{\textstyle \frac{1}{2}}),\nu \in (1,2):\) From line 1 of the table we get \(\delta =\mathcal{O }(c_0\tau ^{1/2+\kappa })+\mathcal{O }(\frac{1}{c_0\sqrt{\tau }}) =\mathcal{O }(c_0^{1-\nu (1/2+\kappa )})+\mathcal{O }(c_0^{\nu /2-1})\). The contributions are of the same order if we set \(\nu =2/(1+\kappa )\).

  • \(q\ge 3,\kappa <\frac{1}{q},\nu \in (1,\frac{1}{1-\kappa })\), assuming \(\omega >0:\) Line 2 of the table gives \(\delta =\mathcal{O }(c_0\tau )+\mathcal{O }(\frac{1}{c_0\sqrt{\tau }})=\mathcal{O }(c_0^{1-\nu })+\mathcal{O }(c_0^{\nu /2-1})\). The contributions are of the same order if we set \(\nu =4/3\). However, \(\kappa \) may be so small that \(4/3\) lies outside the given range \(\nu \in (1,\frac{1}{1-\kappa })\). In that case, the \(\mathcal{O }(c_0^{1-\nu })\) wins and we want to make \(\nu \) as large as possible.

  • \(q\ge 3,\kappa <\frac{1}{q},\nu \in \left( 1,\frac{1}{1-\kappa }\right) \), assuming \(\omega <0:\) We have \(\delta =-\mathcal{O }(c_0\tau )+\mathcal{O }\left( \frac{1}{c_0\sqrt{\tau }}\right) =-\mathcal{O }\left( c_0^{1-\nu }\right) +\mathcal{O }\left( c_0^{\nu /2-1}\right) \). We want the \(c_0\tau \) to win by as large a margin as possible. This is achieved by setting \(\nu =1+0^+\).

  • \(q\ge 3,\kappa <\frac{1}{q},\kappa <{\textstyle \frac{1}{4}},\nu \in \left( \frac{1}{1-\kappa },\frac{2}{1+2\kappa }\right] , \omega <0:\) Line 3 of the table gives \(\delta =-\mathcal{O }(\tau ^\kappa )+\mathcal{O }(\frac{1}{c_0\sqrt{\tau }})=-\mathcal{O }(c_0^{-\nu \kappa })+ \mathcal{O }(c_0^{\nu /2-1})\). By setting \(\nu \) as small as possible, \(\nu _*=\frac{1}{1-\kappa }+0^+\), we let the negative term win as much as possible. This can be seen by comparing the powers: \(-\nu _*\kappa -(\nu _*/2-1)=(\textstyle \frac{1}{2}-2\kappa )/(1-\kappa )-0^+\), which is positive by virtue of \(\kappa <{\textstyle \frac{1}{4}}\).

  • \(q\ge 3,\kappa <\frac{1}{q},\kappa <{\textstyle \frac{1}{4}},\nu \in (\frac{1}{1-\kappa },\frac{2}{1+2\kappa }], \omega >0:\) Now we have \(\delta =+\mathcal{O }(c_0^{-\nu \kappa })+\mathcal{O }(c_0^{\nu /2-1})\). The two terms are of equal order if we set \(\nu =\frac{2}{1+2\kappa }\).

  • \(q\ge 3,\kappa <\frac{1}{q},\nu \in \left( \max \{\frac{1}{1-\kappa }, \frac{2}{1+2\kappa }\},2\right) :\) Now we have \(\delta =\pm \mathcal{O }(c_0^{-\nu \kappa })+\mathcal{O }(c_0^{\nu /2-1})\), but the second term always wins. The optimum is to set \(\nu \) as small as possible.

  • \(q=2,\kappa \in [{\textstyle \frac{1}{2}},1),\nu \in (1,\frac{4}{1+2\kappa }):\) Line 4 of the table gives \(\delta =\mathcal{O }(c_0\tau ^{1/2+\kappa })+\mathcal{O }(\frac{1}{c_0\sqrt{\tau }})= \mathcal{O }(c_0^{1-\nu (1/2+\kappa )})+\mathcal{O }(c_0^{\nu /2-1})\). The balance lies at \(\nu =\frac{2}{1+\kappa }\), which is inside \((1,\frac{4}{1+2\kappa })\).

  • \(q=2,\kappa \in [{\textstyle \frac{1}{2}},1),\nu \in (\frac{4}{1+2\kappa },2):\) Line 5 of the table gives \(\delta =\mathcal{O }(c_0^{-1})+\mathcal{O }(\frac{1}{c_0\sqrt{\tau }})\). The \(c_0^{-1}\) always loses. The optimum is to set \(\nu \) as small as possible.

  • \(q=2,\kappa <{\textstyle \frac{1}{2}},\nu \in (1,\frac{2}{1+2\kappa }):\) Line 6 of the table gives \(\delta =-\mathcal{O }(\tau ^\kappa )+\mathcal{O }(\frac{1}{c_0\sqrt{\tau }}) =-\mathcal{O }(c_0^{-\nu \kappa })+\mathcal{O }(c_0^{\nu /2-1})\). Let \(\kappa =\textstyle \frac{1}{2}-\psi \) and \(\nu =1+\varepsilon \). The negative term wins as long as \(\varepsilon <\psi /(1-\psi )\).

  • \(q=2,\kappa <{\textstyle \frac{1}{2}},\nu \in (\frac{2}{1+2\kappa },2):\) Again we have \(\delta =-\mathcal{O }(c_0^{-\nu \kappa })+\mathcal{O }(c_0^{\nu /2-1})\), but now the positive term always wins. The optimum is to set \(\nu \) as small as possible.

1.1.8 Proof of Theorem 6

Lemma 12 (from which the correction terms were derived) is applicable only for \(\tau <\varepsilon /c_0\). This translates to \(Tc_0^{-\rho }<\varepsilon \), i.e. \(c_0>(T/\varepsilon )^{1/\rho }\). This explains the condition on \(c_0\). Applying a Taylor expansion to Corollary 7 for \(\psi =-\varepsilon \) gives

$$\begin{aligned} \frac{\varGamma (1-\varepsilon )}{\varGamma (\textstyle \frac{1}{2}-\varepsilon )}=\frac{1}{\sqrt{\pi }}[1-\varepsilon \cdot 2\ln 2+\mathcal{O }(\varepsilon ^2)], \end{aligned}$$
(105)

which leads to the factor after \(\frac{\pi ^2}{2}\) in (64). In Theorem 6, the quotient of the positive correction term and the negative one is \(\frac{1}{6T}c_0^{-\varepsilon +\rho (1-\varepsilon )}\). The condition \(\rho <\varepsilon /(1-\varepsilon )\) ensures that the negative correction term dominates for sufficiently large \(c_0\). The above-mentioned quotient is smaller than 1 for \(c_0>(1/6T)^{1/(\varepsilon -\rho +\varepsilon \rho )}\).
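The expansion (105) is easily checked numerically; the sketch below (not part of the original proof) shows the error shrinking quadratically in \(\varepsilon \).

```python
# Illustrative numerical check of the expansion (105).
from math import gamma, sqrt, pi, log

for eps in (1e-1, 1e-2, 1e-3):
    exact = gamma(1 - eps) / gamma(0.5 - eps)
    approx = (1 - eps * 2 * log(2)) / sqrt(pi)
    print(eps, exact, approx)   # the difference shrinks roughly like eps**2
```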

1.1.9 Proof of Theorem 7

The value of \(\tau \) is given. We define \(r=c_0\sqrt{\tau }(MA-B)/\eta \), with \(r\ge 0\). Instead of the variables \((A,B)\) we consider \((A,r)\) as our independent variables of interest. We are allowed to apply Corollary 4, since the condition \(MA-B>0\) is satisfied by the \(A,B\) solution given in Theorem 7. Using Corollary 4 and \(V^2<q\) (Lemma 1), we find

$$\begin{aligned} A\le f_2(\tau ,r) \implies P_\mathrm{FN}\le \varepsilon _2. \end{aligned}$$
(106)

Rewriting (22) in terms of \(A,r\) is a bit more laborious. It results in a quadratic inequality for \(A\),

$$\begin{aligned} 0\le \frac{M^2}{2}A^2-A\left\{ 1+\frac{M}{c_0\sqrt{\tau }}\left( \frac{1}{3}+\eta r\right) \right\} +\frac{\eta r}{c_0^2 \tau }\left( \frac{1}{3}+\frac{1}{2} \eta r\right) \quad \implies \quad P_\mathrm{FP}\le \varepsilon _1. \end{aligned}$$
(107)

The quadratic function in \(A\) has two positive roots. We concentrate on the largest root,

$$\begin{aligned} A\ge f_1(\tau ,r) \quad \implies \quad P_\mathrm{FP}\le \varepsilon _1. \end{aligned}$$
(108)

We have \(f_1(\tau ,0)>0\), \(f_2(\tau ,0)=0\), \(\frac{\partial f_1}{\partial r}(\tau ,r\rightarrow \infty )=2\eta /(Mc_0\sqrt{\tau })\) and \(\frac{\partial f_2}{\partial r}(\tau ,r\rightarrow \infty )=\eta /(eq c_0 \tau )\). Since it was given that \(\sqrt{\tau }<\frac{M}{2eq}\), it holds at large enough \(r\) that \(\partial f_2/\partial r>\partial f_1/\partial r\). Hence there exists a point \(r_*(\tau )\) where \(f_1(\tau ,r_*)=f_2(\tau ,r_*)\). See Fig. 3. The value \(r_*(\tau )\) is the smallest value of \(r\) for which both conditions \(A\ge f_1(\tau ,r)\) and \(A\le f_2(\tau ,r)\) can hold simultaneously.
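For concreteness, \(f_1(\tau ,r)\) can be computed directly as the largest root of the quadratic printed in (107); the sketch below (not part of the original proof) treats \(M\), \(\eta \), \(c_0\), \(\tau \) and \(r\) as given constants with purely illustrative values (the companion function \(f_2\) is not reproduced here, since its closed form is not shown in this excerpt).

```python
# Sketch: f1(tau, r) as the largest root of the quadratic in A from (107), i.e.
#   (M^2/2) A^2 - A [1 + (M/(c0 sqrt(tau)))(1/3 + eta r)]
#                + (eta r/(c0^2 tau))(1/3 + eta r/2) = 0.
# The parameter values below are illustrative only.
from math import sqrt

def f1(tau, r, M, eta, c0):
    b = 1 + (M / (c0 * sqrt(tau))) * (1/3 + eta * r)
    c = (eta * r / (c0**2 * tau)) * (1/3 + eta * r / 2)
    disc = b * b - 2 * M * M * c          # discriminant of the quadratic
    return (b + sqrt(disc)) / (M * M)     # the larger of the two positive roots

print(f1(tau=1e-4, r=0.5, M=0.5, eta=1.0, c0=50))
```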

1.1.10 Proof of Theorem 8

We have the following derivatives,

$$\begin{aligned} \frac{\partial f_2}{\partial \tau }= -\frac{1}{\tau }f_2 <0 \quad \quad \frac{\partial f_2}{\partial r}= \frac{r+1}{r^2}f_2 >0 \quad \quad \frac{\partial f_1}{\partial r}= \frac{\eta }{Mc_0\sqrt{\tau }}\left( 1+\frac{1}{2\sqrt{D}}\right) >0. \end{aligned}$$
(109)

The implicit function theorem gives us

$$\begin{aligned} \frac{\mathrm{d}r_*(\tau )}{\mathrm{d}\tau }=-\left. \frac{\partial (f_1-f_2)/\partial \tau }{\partial (f_1-f_2)/\partial r}\right| _{r=r_*(\tau )}. \end{aligned}$$
(110)

Minimization of \(A=f_2(r_*)\) with respect to \(\tau \) can be written as

$$\begin{aligned} 0=\frac{\mathrm{d}f_2(\tau ,r_*(\tau ))}{\mathrm{d}\tau }= \frac{\partial f_2}{\partial \tau }(\tau ,r_*)+\frac{\partial f_2}{\partial r}(\tau ,r_*)\frac{\mathrm{d}r_*(\tau )}{\mathrm{d}\tau }. \end{aligned}$$
(111)

Substitution of (110) and the \(\partial f_2/\partial \tau \) and \(\partial f_2/\partial r\) from (109) into (111) yields

$$\begin{aligned} 0=-\frac{1}{\tau }f_2(\tau ,r_*)- \frac{r_*+1}{r_*^2}f_2(\tau ,r_*) \left. \frac{\partial (f_1-f_2)/\partial \tau }{\partial (f_1-f_2)/\partial r}\right| _{r=r_*(\tau )}. \end{aligned}$$
(112)

Multiplication by \(\tau /f_2\) and some slight rearranging yields the end result.

1.1.11 Proof of Theorem 9

The idea is to pick a value \(\hat{r}\) (75) slightly larger than \(r_*\), and set \(A\) to some value \(\hat{A}\in [f_1(\tau ,\hat{r}),f_2(\tau ,\hat{r})]\). The condition (70) is necessary so that we can use Theorem 7.

We introduce the abbreviation \(y=\ln [\ln (1/c_0 \tau ) \frac{2eq}{M^2\eta }]/\ln (1/c_0 \tau )\). The condition (71) ensures that \(y<1\). We then have

$$\begin{aligned} f_2(\tau ,\hat{r})=\frac{2}{M^2}\cdot \frac{1}{1-y}>\frac{2}{M^2}(1+y). \end{aligned}$$
(113)

Condition (71) also ensures that \(\hat{r}<1\). For \(f_1\) we get

$$\begin{aligned} f_1(\tau ,\hat{r})\le \frac{2}{M^2}\left[ 1+\frac{M}{3c_0\sqrt{\tau }}(1+3\eta \hat{r}) \right] <\frac{2}{M^2}\left[ 1+\frac{M}{3c_0\sqrt{\tau }}(1+3\eta )\right] , \end{aligned}$$
(114)

where the first inequality follows from neglecting a part of the discriminant \(D\) (67) and the second inequality from \(\hat{r}<1\). Condition (72) ensures that the lower bound on \(f_2(\tau ,\hat{r})\), i.e. the last expression in (113), lies higher than the upper bound on \(f_1(\tau ,\hat{r})\), so that indeed we have \(f_2(\tau ,\hat{r})>f_1(\tau ,\hat{r})\) as planned. Furthermore, the first inequality in (114), together with the definition of \(\hat{A}\) in (73), tells us that indeed \(f_1(\tau ,\hat{r})<\hat{A}<f_2(\tau ,\hat{r})\). The choice for \(B\) follows by setting \(\hat{B}=M\hat{A}-\frac{\eta \hat{r}}{c_0\sqrt{\tau }}\) just as in Theorem 7.

Cite this article

Škorić, B., Oosterwijk, JJ. Binary and \(q\)-ary Tardos codes, revisited. Des. Codes Cryptogr. 74, 75–111 (2015). https://doi.org/10.1007/s10623-013-9842-3
