On the Distribution of Scrambled \((0,m,s)\)-Nets Over Unanchored Boxes

  • Conference paper
  • Published in: Monte Carlo and Quasi-Monte Carlo Methods (MCQMC 2020)

Abstract

We introduce a new quality measure to assess randomized low-discrepancy point sets of finite size \(n\). This new quality measure, which we call “pairwise sampling dependence index”, is based on the concept of negative dependence. A negative value for this index implies that the corresponding point set integrates the indicator function of any unanchored box with smaller variance than the Monte Carlo method. We show that scrambled \((0,m,s)\)-nets have a negative pairwise sampling dependence index. We also illustrate through an example that randomizing via a digital shift instead of scrambling may yield a positive pairwise sampling dependence index.


References

  1. Dick, J., Pillichshammer, F.: Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, UK (2010)

  2. Doerr, B., Gnewuch, M.: On Negative Dependence Properties of Latin Hypercube Samples and Scrambled Nets (2021). Preprint on arXiv.org

  3. Gnewuch, M., Wnuk, M., Hebbinghaus, N.: On negatively dependent sampling schemes, variance reduction, and probabilistic upper discrepancy bounds. In: Bilyk, D., Dick, J., Pillichshammer, F. (eds.) Discrepancy Theory, Radon Series on Computational and Applied Mathematics, vol. 26, pp. 43–68, De Gruyter (2020)

  4. Gnewuch, M., Hebbinghaus, N.: Discrepancy Bounds for a Class of Negatively Dependent Random Points Including Latin Hypercube Samples (2021). Preprint on arXiv.org

  5. Graham, R., Knuth, D., Patashnik, O.: Concrete Mathematics: A Foundation for Computer Science, Addison-Wesley (1989)

  6. Niederreiter, H.: Random number generation and Quasi-Monte Carlo methods. In: SIAM CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 63. SIAM, Philadelphia (1992)

  7. Ostrovskii, I.V.: On a problem of A. Eremenko. Comput. Meth. Funct. Th. 4, 275–282 (2004)

  8. Owen, A.B.: Randomly permuted \((t, m, s)\)-nets and \((t, s)\)-sequences. In: Niederreiter, H., Shiue, P.J.-S. (eds.) Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing. Lecture Notes in Statistics, vol. 106, pp. 299–317. Springer, New York (1995)

  9. Owen, A.B.: Scrambling Sobol’ and Niederreiter-Xing points. J. Complex. 14, 466–489 (1998)

  10. Wiart, J., Lemieux, C., Dong, G.: On the dependence structure and quality of scrambled \((t, m, s)\)-nets. Monte Carlo Methods Appl. 27, 1–26 (2021)

  11. Wnuk, M., Gnewuch, M.: Note on pairwise negative dependence of randomized rank-1 lattices. Oper. Res. Lett. 48, 410–414 (2020)


Acknowledgements

We thank the anonymous reviewers for their detailed comments. The first author thanks NSERC for their support via grant #238959. The second author wishes to acknowledge the support of the Austrian Science Fund (FWF): Projects F5506-N26 and F5509-N26, which are parts of the Special Research Program “Quasi-Monte Carlo Methods: Theory and Applications”.

Author information

Correspondence to Christiane Lemieux.


Appendix: Proofs and Technical Lemmas

We first prove results stated in the main part of the paper. These proofs make use of Lemmas 10 to 15, which are presented in the second part of the appendix.

Proof (of Lemma 2)

In what follows, we will use the notation \(x_{\ell }\) to represent the \(\ell \)th digit in the base b representation of \(x \in [0,1)\), i.e., \(x=\sum _{\ell \ge 1} x_{\ell } b^{-\ell }\), and the corresponding notation \(x = 0.x_1 x_2 x_3 \ldots \).
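As an illustrative aside (ours, not part of the paper), the digit expansion and the quantity \(\gamma _b(x,y)\) used below, namely the number of initial common base-\(b\) digits of \(x\) and \(y\) as in [10], can be computed exactly with rational arithmetic:

```python
from fractions import Fraction

def digits(x, b, n):
    """First n base-b digits of x in [0,1) (exact, using Fractions)."""
    out = []
    for _ in range(n):
        x *= b
        d = int(x)
        out.append(d)
        x -= d
    return out

def gamma_b(x, y, b, prec=30):
    """Number of initial common base-b digits of x and y."""
    dx, dy = digits(x, b, prec), digits(y, b, prec)
    k = 0
    while k < prec and dx[k] == dy[k]:
        k += 1
    return k

# 5/27 = 0.012 and 7/27 = 0.021 in base 3: they share one initial digit.
assert digits(Fraction(5, 27), 3, 3) == [0, 1, 2]
assert gamma_b(Fraction(5, 27), Fraction(7, 27), 3) == 1
```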

First, we decompose A into three parts as \(A_1 = [hb^{-r+1}+gb^{-r}-z,hb^{-r+1}+gb^{-r})\), \(A_2 = [hb^{-r+1}+Gb^{-r}, hb^{-r+1}+Gb^{-r}+Z)\), \(A_3 = [hb^{-r+1}+gb^{-r},hb^{-r+1}+Gb^{-r})\). Hence we have

$$\begin{aligned} V_i(A \times A) = \sum _{\ell =1}^3 V_i(A_{\ell } \times A_{\ell }) + 2 (V_i(A_1 \times A_2)+V_i(A_1\times A_3)+V_i(A_2\times A_3)) , \qquad i \ge 0. \end{aligned}$$
(21)

Since \(A_1\) and \(A_2\) are both completely contained in the respective intervals \([hb^{-r+1}+(g-1)b^{-r},hb^{-r+1}+gb^{-r})\) and \([hb^{-r+1}+Gb^{-r},hb^{-r+1}+(G+1)b^{-r})\), any \(x\) in \(A_1\) is of the form \(0.h_1\ldots h_{r-1}(g-1) x_{r+1}x_{r+2} \ldots \). Similarly, \(y \in A_2\) is of the form \(0.h_1\ldots h_{r-1} (G) y_{r+1}y_{r+2} \ldots \). On the other hand, any \(w \in A_3\) satisfies \(w_i=h_i\) for \(i \le r-1\) and \(w_r \in \{g,\ldots ,G-1\}\), with the remaining digits unconstrained. From this we infer:

  1. No pair of points from \(A_1\) or from \(A_2\) can have fewer than \(r\) initial common digits; thus \(V_i(A_{\ell }\times A_{\ell })=0\) for \(i=0,\dots ,r-1\) and \(\ell =1,2\).

  2. No pair of points from \(A_3\) can have fewer than \(r-1\) initial common digits; thus \(V_i(A_3\times A_3)=0\) for \(i=0,\dots ,r-2\).

  3. A pair of points from any two of the following subsets: \(A_1\), \(A_2\), \([h b^{-r+1}+\beta b^{-r}, h b^{-r+1}+(\beta +1)b^{-r})\subseteq A_3\), where \(\beta =g,\dots , G-1\), has exactly \(r-1\) initial common digits, thus \(V_{r-1}(A_j\times A_{\ell })={{\,\mathrm{Vol}\,}}(A_j){{\,\mathrm{Vol}\,}}(A_{\ell })\) for \(j\ne \ell \), \(V_{r-1}(A_3 \times A_3)=(G-g)(G-g-1) b^{-2r}\) and \(V_i(A_j \times A_{\ell }) = 0\) for \(i \ge r, j \ne \ell \).

Note that this implies that \(\tilde{V}_r(A_i \times A_i) = \mathrm{Vol}^2(A_i)\) for \(i=1,2\), and (using item (3)) \(\tilde{V}_{r}(A_3\times A_3) =\mathrm{Vol}^2(A_3) -V_{r-1}(A_3\times A_3)= (G-g)b^{-2r}\).

The above statements also allow us to simplify (21) as follows:

$$\begin{aligned} V_{r-1}(A\times A)&= V_{r-1}(A_3\times A_3) + 2\sum _{1 \le i< \ell \le 3} V_{r-1}(A_i\times A_{\ell })\nonumber \\&= V_{r-1}(A_3\times A_3) + 2 \sum _{1 \le i < \ell \le 3} \mathrm{Vol}(A_i)\mathrm{Vol}(A_{\ell }) \end{aligned}$$
(22)
$$\begin{aligned} V_i(A\times A)&= \sum _{\ell =1}^3 V_i(A_{\ell }\times A_{\ell }) \quad i \ge r. \end{aligned}$$
(23)

To prove (ii), consider the mappings \(\varphi _j:[0,1) \rightarrow [0,1)\), \(1\le j \le 3\) defined as:

$$\begin{aligned} \varphi _1(hb^{-r+1}+gb^{-r}-x)&= 1-x, \qquad 0 \le x< b^{-r}\\ \varphi _2(hb^{-r+1}+Gb^{-r}+x)&= x, \qquad 0 \le x< b^{-r}\\ \varphi _3(hb^{-r+1}+gb^{-r}+x)&= x, \qquad 0 \le x <(G-g)b^{-r}. \end{aligned}$$

All three are isometric mappings and such that \(\varphi _1(A_1) = [1-z,1)\), \(\varphi _2(A_2) = [0,Z)\), and \(\varphi _3(A_3) = [0,(G-g)b^{-r})\). Also, since \(\varphi _j\) simply amounts to changing the first r digits of a point in \(A_j\) (and applies the same change to all points in \(A_j\)), it implies

$$ \gamma _b(\varphi _j(\nu _{j,\ell }),\varphi _j(\nu _{j,h})) = \gamma _b(\nu _{j,\ell },\nu _{j,h}), \qquad j=1,2,3, $$

where \(\nu _{1,\ell } =hb^{-r+1}+gb^{-r}-x,\nu _{1,h} = hb^{-r+1}+gb^{-r}-y\), \(\nu _{2,\ell } =hb^{-r+1}+Gb^{-r}+x,\nu _{2,h} = hb^{-r+1}+Gb^{-r}+y\), and \(\nu _{3,\ell } =hb^{-r+1}+gb^{-r}+w,\nu _{3,h} = hb^{-r+1}+gb^{-r}+z\). Therefore

$$\begin{aligned} V_i(A_1\times A_1)&= V_i(\varphi _1(A_1) \times \varphi _1(A_1)) = V_i([1-z,1) \times [1-z,1)) = V_i([0,z) \times [0,z))\\ V_i(A_2 \times A_2)&= V_i(\varphi _2(A_2) \times \varphi _2(A_2)) = V_i([0,Z) \times [0,Z)) \\ V_i(A_3 \times A_3)&= V_i(\varphi _3(A_3) \times \varphi _3(A_3)) = V_i([0,(G-g)b^{-r}) \times [0,(G-g)b^{-r})). \end{aligned}$$

These intervals, being anchored at the origin, satisfy the assumptions of Lemma 2.6 from [10], which implies \(bV_{i+1}(A_j \times A_j) -V_i(A_j \times A_j) \ge 0\) for \(j=1,2,3\) and \(i \ge 0\). Combining this with (23), property (ii) in the statement of Lemma 2 is established.

For (iii), let us first assume \(g=G\), and thus \(A_3 = \emptyset \). Then, using (22), we get \(V_{r-1}(A \times A) = 2\mathrm{Vol}(A_1)\mathrm{Vol}(A_2). \) Furthermore, \(\tilde{V}_r(A \times A) = \tilde{V}_r(A_1 \times A_1) + \tilde{V}_r(A_2 \times A_2) = \mathrm{Vol}^2(A_1) + \mathrm{Vol}^2(A_2)\). Since \(2\mathrm{Vol}(A_1)\mathrm{Vol}(A_2) \le \mathrm{Vol}^2(A_1) + \mathrm{Vol}^2(A_2)\), (iii) is proved.

Now assume \(g<G\). In this case, we need to further refine \(A_1\) and \(A_2\) as:

$$\begin{aligned} A_1&= [hb^{-r+1}+gb^{-r}-db^{-(r+1)}-f, hb^{-r+1}+gb^{-r})\\ A_2&= [hb^{-r+1}+Gb^{-r},hb^{-r+1}+Gb^{-r}+Db^{-(r+1)}+F), \end{aligned}$$

where \(0 \le d,D \le b-1\), \(f,F \in [0,b^{-(r+1)})\). Using (22) and (23), we then write

$$\begin{aligned} V_{r-1}(A \times A)&= V_{r-1}(A_3 \times A_3) + 2 \sum _{1 \le i < \ell \le 3} \mathrm{Vol}(A_i)\mathrm{Vol}(A_{\ell }) \\&= \frac{(G-g)(G-g-1)}{b^{2r}}+2 \left( \left( \frac{d}{b^{r+1}}+f\right) \left( \frac{D}{b^{r+1}}+F\right) \right. \\&\qquad \left. + \frac{G-g}{b^r}\left( \frac{d}{b^{r+1}}+f\right) +\frac{G-g}{b^r} \left( \frac{D}{b^{r+1}}+F\right) \right) \end{aligned}$$
$$\begin{aligned} V_r(A \times A)&= \sum _{\ell =1}^3 V_r(A_{\ell } \times A_{\ell }) = V_r(A_1 \times A_1)+V_r(A_2 \times A_2) + \frac{G-g}{b^{2r}} \frac{b-1}{b} \\&= \frac{(d-1)d}{b^{2(r+1)}} + \frac{2fd}{b^{r+1}}+\frac{(D-1)D}{b^{2(r+1)}} + \frac{2FD}{b^{r+1}} + \frac{G-g}{b^{2r}} \frac{b-1}{b} \\ \tilde{V}_r(A \times A)&= \left( \frac{d}{b^{r+1}}+f\right) ^2 + \left( \frac{D}{b^{r+1}}+F\right) ^2 + \frac{G-g}{b^{2r}}. \end{aligned}$$

The last equality for \(V_r(A \times A)\) is obtained by observing that for \((x,y)\) to contribute to \(V_r(A_1 \times A_1)\), either (i) \(x=0.h_1\ldots h_{r-1}(g-1)d_1\ldots \) and \(y=0.h_1\ldots h_{r-1}(g-1)d_2\ldots \) with \(d_1 \ne d_2 \in \{0,\ldots ,d-1\}\), or (ii) one of them is of the form \(z_1 + (0.h_1\ldots h_{r-1}(g-1)d) \) with \(z_1 \in [0,f)\) and the other is of the form \(0.h_1\ldots h_{r-1}(g-1)d_1\ldots \) with \(d_1 \in \{0,\ldots ,d-1\}\). Case (i) contributes a volume of size \((d-1)d b^{-2(r+1)}\) and case (ii) contributes \(2fdb^{-(r+1)}\). A similar argument can be used to derive \(V_r(A_2 \times A_2)\).

Therefore \(V_{r-1}(A \times A) -\frac{b(b-2)}{b-1}V_r(A \times A) \le \tilde{V}_r(A \times A)\) holds if

$$\begin{aligned}&\frac{(G-g)(G-g-1)}{b^{2r}}+2 \left( \frac{G-g}{b^r} \left( \frac{d+D}{b^{r+1}}+f+F\right) \right) + 2\left( \frac{d}{b^{r+1}}+f\right) \left( \frac{D}{b^{r+1}}+F\right) \nonumber \\ -&\frac{b(b-2)}{b-1} \left( \frac{(d-1)d}{b^{2(r+1)}} + \frac{2fd}{b^{r+1}}+\frac{(D-1)D}{b^{2(r+1)}} + \frac{2FD}{b^{r+1}}+ \frac{G-g}{b^{2r}} \frac{b-1}{b}\right) \nonumber \\ \le&\left( \frac{d}{b^{r+1}}+f\right) ^2 + \left( \frac{D}{b^{r+1}}+F\right) ^2 + \frac{G-g}{b^{2r}}. \end{aligned}$$
(24)

Since

$$ 2\left( \frac{d}{b^{r+1}}+f\right) \left( \frac{D}{b^{r+1}}+F\right) \le \left( \frac{d}{b^{r+1}}+f\right) ^2 + \left( \frac{D}{b^{r+1}}+F\right) ^2 $$

it means that to prove (24) it is sufficient to show that

$$\begin{aligned}&\frac{(G-g)(G-g-1)}{b^{2r}} - (b-2) \frac{G-g}{b^{2r}}+2 \left( \frac{G-g}{b^r} \left( \frac{d+D}{b^{r+1}}+f+F\right) \right) \\&-\frac{b(b-2)}{b-1} \left( \frac{(d-1)d}{b^{2(r+1)}} + \frac{2fd}{b^{r+1}}+\frac{(D-1)D}{b^{2(r+1)}} + \frac{2FD}{b^{r+1}} \right) \le \frac{G-g}{b^{2r}}, \end{aligned}$$

or equivalently, that

$$\begin{aligned}&-b^2 (G-g)(b-(G-g)) +2b (G-g) \left( d+D+b^{r+1}(f+F)\right) \nonumber \\&-\frac{b(b-2)}{b-1} \left( (d-1)d+ 2fdb^{r+1}+(D-1)D+ 2FDb^{r+1}\right) \le 0. \end{aligned}$$
(25)

Note that \(G-g \le b-2\) by assumption. We proceed by considering three cases:

Case 1: \(G-g \le b-4\). This implies \(b-(G-g) \ge 4\) (and thus \(b \ge 4\)). Also, to handle this case we use the fact that \(0 \le fb^{r+1},Fb^{r+1} < 1\). By making appropriate substitutions for f and F, we see that to prove (25) holds it is sufficient to show that

$$ -4b^2(G-g)+2b(G-g)(d+D+2) - \frac{b(b-2)}{b-1}(d(d-1)+D(D-1)) \le 0 $$

which holds because \(d+D+2 \le 2b\).

Case 2: \(G-g = b-3\). First note that this implies \(b \ge 3\). Next, we replace \(G-g\) with \(b-3\) in (25) and divide each term by b. For this case, we can use the bound \(0 \le fb^{r+1},Fb^{r+1} <1\); substituting appropriately, it is sufficient to show

$$ -3b(b-3)+2(b-3)(d+D+2)-\frac{b-2}{b-1}(d(d-1)+D(D-1)) \le 0. $$

We view the LHS as the sum of two quadratic polynomials, p(d) and p(D), and thus argue it is sufficient to show that

$$ p(d) := \frac{-(b-2)}{b-1} d^2 + d \left( 2(b-3) +\frac{b-2}{b-1} \right) -(b-3)(3b/2-2) \le 0. $$

We will show this holds by finding the value \(d_{max}\) of \(d\) that maximizes \(p(d)\) and showing that \(p(d_{max}) \le 0\). We have that

$$ p'(d) = -2d\frac{b-2}{b-1} + 2(b-3) +\frac{b-2}{b-1}. $$

Therefore

$$ d_{max} = \left( 2(b-3) +\frac{b-2}{b-1} \right) \frac{b-1}{2(b-2)} = \frac{(b-3)(b-1)}{b-2} + \frac{1}{2}. $$

Hence \(d_{max} \in (b-2.5,b-1.5)\). Thus it is sufficient to show \(p(b-2) \le 0\). Now,

$$ p(b-2) = -\frac{(b-2)^3}{b-1} + (b-2)\left( 2(b-3)+\frac{b-2}{b-1}\right) -(b-3)(3b/2-2) $$

therefore

$$\begin{aligned} (b-1)p(b-2)&=-(b-2)^3+2(b-1)(b-2)(b-3)+(b-2)^2\\ {}&\qquad -\left( \frac{3b}{2}-2\right) (b-3)(b-1) \\&=(3-b)(b^2-3b+4)/2=(3-b)(b(b-3)+4)/2\le 0 \end{aligned}$$

since \(b \ge 3\).

Case 3: \(G-g = b-2\)

In this case (25) becomes

$$\begin{aligned}&-2b^2(b-2)+2b(b-2)(d+D+(f+F)b^{r+1})\\&- \frac{b(b-2)}{b-1} \left( d(d-1)+D(D-1) + 2b^{r+1} (fd+FD)\right) \le 0 \\ \Leftrightarrow&-2b+2(d+D+(f+F)b^{r+1}) \\&\qquad - \frac{1}{b-1} \left( d(d-1)+D(D-1) + 2b^{r+1} (fd+FD)\right) \le 0. \end{aligned}$$

As in the case \(G-g=b-3\), we argue it is sufficient to show each of the quadratic polynomials in d and D on the LHS (which are the same) is bounded from above by 0. That is, we need to show

$$ q(d) := \frac{-d^2}{b-1} + d\left( 2+\frac{1}{b-1}-\frac{2fb^{r+1}}{b-1} \right) -(b-2fb^{r+1}) \le 0. $$

Now

$$ q'(d) = \frac{-2d}{b-1} +2 + \frac{1}{b-1}-\frac{2fb^{r+1}}{b-1} $$

and thus \(d_{max} = \left( 2 + \frac{1}{b-1}-\frac{2fb^{r+1}}{b-1} \right) \frac{b-1}{2} = b-0.5-fb^{r+1}\), which implies \(d_{max} \in (b-1.5, b-0.5)\). Thus it is sufficient to show \(q(b-1) \le 0\). We have that

$$\begin{aligned} q(b-1)&= -(b-1)+(b-1)\left( 2 + \frac{1}{b-1}-\frac{2fb^{r+1}}{b-1}\right) -(b-2fb^{r+1}) \\&=(b-1)+1-2fb^{r+1}-b+2fb^{r+1} = 0 \end{aligned}$$

as required. \(\square \)
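As an editorial sanity check (ours, not part of the original argument), inequality (25) can be verified numerically by brute force: writing \(u = fb^{r+1}\) and \(v = Fb^{r+1}\), both in \([0,1)\), removes the dependence on \(r\), and the left-hand side of (25) is linear in \(u\) and in \(v\), so checking endpoint values suffices.

```python
# Brute-force check of inequality (25), with u = f*b^(r+1), v = F*b^(r+1).
def lhs25(b, delta, d, D, u, v):
    return (-b * b * delta * (b - delta)
            + 2 * b * delta * (d + D + u + v)
            - (b * (b - 2) / (b - 1))
              * (d * (d - 1) + 2 * u * d + D * (D - 1) + 2 * v * D))

ok = all(lhs25(b, delta, d, D, u, v) <= 1e-9
         for b in range(3, 9)
         for delta in range(1, b - 1)      # delta = G - g, 1 <= delta <= b-2
         for d in range(b) for D in range(b)
         for u in (0.0, 0.999) for v in (0.0, 0.999))
assert ok
```

Equality is attained in the case \(G-g=b-2\), \(d=D=b-1\), \(u,v\rightarrow 1\), matching the computation \(q(b-1)=0\) above.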

Proof (of Theorem 1)

To simplify the notation, we define \(Z_k := b^{2k}V(I_k \times I_k)\) and \(W_k^{(r-1)} := (b^{2(k+r+1)}/4)V(Y_k^{(r-1)} \times Y_k^{(r-1)})\) to be the (normalized) volume vectors of \(k\)-elementary intervals and elementary unanchored \((k,r-1)\)-intervals, respectively. Since the coordinates of each of these vectors are nonnegative and sum to one, \(\mathrm{Vol}^2(A)=\sum _{k\ge 0}(\alpha _k+\tau _k)\) follows immediately from the last equality in the statement of the theorem.

Based on the definition of \(Y_k^{(r-1)}\), the vectors \(W_k^{(r-1)}\) satisfy, for \(i\ge 0\), \(r \ge 1\),

$$ W_{i,k}^{(r-1)} = {\left\{ \begin{array}{ll} 1/2 &{} \text{ if } i=r-1 \\ (b-1)/(2b^{i-(k+r)}) &{} \text{ if } i \ge k+r+1\\ 0 &{} \text{ otherwise, } \end{array}\right. } $$

while

$$ Z_{i,k} = {\left\{ \begin{array}{ll} 0 &{} \text{ if } i<k \\ (b-1)/b^{i-k+1} &{} \text{ if } i \ge k. \end{array}\right. } $$

Note that

$$\begin{aligned} W_{r-1,k}^{(r-1)}&= 1/2 \text{ for } \text{ all } k \ge 0 \end{aligned}$$
(26)
$$\begin{aligned} W_{i,k}^{(r-1)}&= Z_{i,k+r+1}/2 \text{ for } i \ge k+r+1. \end{aligned}$$
(27)
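These identities, and the fact that the coordinates of \(Z_k\) and \(W_k^{(r-1)}\) each sum to one, can be spot-checked numerically (an illustrative sketch of ours, truncating the geometric tails):

```python
# Coordinates of Z_k and W_k^{(r-1)} as defined above; both should sum to 1.
def Z(i, k, b):
    return 0.0 if i < k else (b - 1) / b**(i - k + 1)

def W(i, k, r, b):
    if i == r - 1:
        return 0.5
    if i >= k + r + 1:
        return (b - 1) / (2 * b**(i - (k + r)))
    return 0.0

b, N = 3, 200   # N truncates the geometric tails; the remainder is ~b^-N
assert abs(sum(Z(i, 5, b) for i in range(N)) - 1) < 1e-12
assert abs(sum(W(i, 4, 2, b) for i in range(N)) - 1) < 1e-12
# And W_{i,k}^{(r-1)} = Z_{i,k+r+1}/2 for i >= k+r+1, as in (27):
assert all(abs(W(i, 4, 2, b) - Z(i, 7, b) / 2) < 1e-15 for i in range(7, N))
```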

There are two cases to consider. First, if \(V_{r-1}(A \times A) \le bV_r(A \times A)\), then based on Properties (i) and (ii) from Lemma 2 we can decompose \(V(A \times A)\) solely with the \(Z_k\)’s, i.e., we set \(\alpha _k = 0\) for all \(k\) and

$$ \tau _k = \frac{bV_k(A \times A)-V_{k-1}(A \times A)}{b-1} \quad k \ge r -1 $$

and \(\tau _k = 0\) if \(0\le k\le r-2\). Note that \(A=[0,1)\) fits into this first case.

Second, if \(V_{r-1}(A \times A) \ge bV_r(A \times A)\), then we first decompose the vector \(\sum _{k=r}^{\infty }V_k(A \times A) \mathbf {e}_k\), where \(\mathbf {e}_k\) is the canonical vector of zeros with a 1 in position \(k\) (note that this vector agrees with \(V(A \times A)\) everywhere except on index \(r-1\), where it has a 0 instead of \(V_{r-1}(A \times A)\)), as

$$ \sum _{k=r}^{\infty }V_k(A \times A) \mathbf {e}_k = \sum _{k=r}^{\infty } \bar{\tau }_k Z_k $$

with \(\bar{\tau }_r = bV_r(A \times A)/(b-1) \ge 0\), \(\bar{\tau }_k=0\) if \(k <r\) (from Part (i) of Lemma 2), and

$$ \bar{\tau }_k = \frac{bV_k(A \times A)-V_{k-1}(A \times A)}{b-1} \quad k \ge r +1. $$

From Lemma 2 we know \(\bar{\tau }_k \ge 0\) for \(k \ge r+1\). Note that \(b\bar{\tau }_r Z_{r-1}- \bar{\tau }_r Z_r = bV_r(A \times A) \mathbf {e}_{r-1}\), i.e., \(b\bar{\tau }_r Z_{r-1}\) agrees with \(\bar{\tau }_r Z_r\) everywhere except on index \(r-1\). Therefore

$$ V(A \times A) - b\bar{\tau }_r Z_{r-1} - \sum _{k=r+1}^{\infty } \bar{\tau }_k Z_k = (V_{r-1}(A \times A)-bV_r(A \times A)) \mathbf {e}_{r-1}. $$
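The coordinatewise identity \(b\bar{\tau }_r Z_{r-1}- \bar{\tau }_r Z_r = bV_r(A \times A) \mathbf {e}_{r-1}\) used here is easy to confirm numerically (an illustrative sketch; the values of \(b\), \(r\) and \(\bar{\tau }_r\) below are arbitrary, and \(\bar{\tau }_r(b-1) = bV_r(A \times A)\)):

```python
# Check that b*tau*Z_{r-1} - tau*Z_r vanishes except at index r-1,
# where it equals tau*(b-1).
def Z(i, k, b):
    return 0.0 if i < k else (b - 1) / b**(i - k + 1)

b, r, tau = 3, 4, 0.7   # arbitrary illustrative values
for i in range(60):
    diff = b * tau * Z(i, r - 1, b) - tau * Z(i, r, b)
    expect = tau * (b - 1) if i == r - 1 else 0.0
    assert abs(diff - expect) < 1e-12
```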

Hence the \(\sum _{k = 0}^{\infty } \alpha _k W_k^{(r-1)} \) part of the decomposition is only needed to decompose \(V_{r-1}(A \times A)-bV_r(A \times A)\). We claim there exists \(\bar{\alpha }_k \ge 0\), \(k \ge r+1\) such that

$$\begin{aligned} V_{r-1}(A \times A)-bV_r(A \times A) = \sum _{k=r+1}^{\infty } \bar{\alpha }_k/2, \end{aligned}$$
(28)

and such that \(\bar{\alpha }_k/2 \le \bar{\tau }_k\) for \(k \ge r+1\). This can be seen using Part (iii) of Lemma 2. Indeed, to ensure the existence of these \(\bar{\alpha }_k\)’s, we need to prove that

$$\begin{aligned} V_{r-1}(A \times A)-bV_r(A \times A) \le \sum _{k=r+1}^{\infty } \bar{\tau }_k. \end{aligned}$$
(29)

Now,

$$\begin{aligned} \sum _{k=r+1}^{\infty } \bar{\tau }_k&= \sum _{k=r+1}^{\infty } \frac{bV_k(A \times A)-V_{k-1}(A \times A)}{b-1} \\&= \tilde{V}_{r+1}(A \times A) - \frac{V_r(A \times A)}{b-1} = \tilde{V}_r(A \times A) -V_r(A \times A) -\frac{V_r(A \times A)}{b-1}. \end{aligned}$$

Therefore, (29) holds if and only if

$$ V_{r-1}(A \times A)-bV_r(A \times A) \le \tilde{V}_r(A \times A) -V_r(A \times A) -\frac{V_r(A \times A)}{b-1} $$

which holds if and only if \( V_{r-1}(A \times A) \le \tilde{V}_r(A \times A) + \frac{b(b-2)}{b-1}V_r(A \times A), \) which is precisely what Part (iii) of Lemma 2 shows. Having proved the existence of non-negative coefficients \(\bar{\alpha }_k\) satisfying (28), we can write \( V_{r-1}(A \times A)-bV_r(A \times A) = \sum _{k=r+1}^{\infty } \bar{\alpha }_k W_{r-1,k-(r+1)}^{(r-1)} \) by using (26). Hence all that is left to do is to find the combination of \(Z_k\)’s that can cancel out \(\sum _{k=r+1}^{\infty } \bar{\alpha }_k W_{i,k-(r+1)}^{(r-1)}\) for \(i \ge r+1\) (we can ignore the case \(i=r\) because \(W_{r,k-(r+1)}^{(r-1)}=0\) for all \(k \ge r+1\)). This is done by using (27), which implies that

$$ \sum _{k=r+1}^{\infty } \bar{\alpha }_k W_{i,k-(r+1)}^{(r-1)} =\sum _{k=r+1}^{\infty } (\bar{\alpha }_k/2) Z_{i,k}. $$

Hence the final decomposition is given by

$$\begin{aligned} V(A \times A)&= \sum _{k=r+1}^{\infty } \bar{\alpha }_k W_{k-(r+1)}^{(r-1)} + b\bar{\tau }_r Z_{r-1} + \sum _{k=r+1}^{\infty } (\bar{\tau }_k-\bar{\alpha }_k/2) Z_k\\&= \sum _{k=0}^{\infty } \alpha _k W_k^{(r-1)} + \sum _{k=0}^{\infty } \tau _k Z_k \end{aligned}$$

with \(\alpha _k = \bar{\alpha }_{k+(r+1)}\), \(k \ge 0\), \(\tau _k = \bar{\tau }_k - \bar{\alpha }_k/2\), \(k \ge r+1\), \(\tau _{r-1} = b \bar{\tau }_r\) and \(\tau _k =0\) for \(0 \le k \le r-2\) and for \(k=r\). \(\square \)

Proof (of Lemma 3)

(1) From the definition of \(D(\boldsymbol{k},\boldsymbol{d},J)\), we have that

$$ \mathrm{Vol}(D(\boldsymbol{k},\boldsymbol{d},J)) = b^{-2|\boldsymbol{k}|_{J^c}}2^{2|J|}b^{-2(|\boldsymbol{k}+\boldsymbol{d}+2|_J)}= 2^{2|J|} b^{-2(|\boldsymbol{k}|+|\boldsymbol{d}+2|_J)}. $$

(2) Similarly, from the definition of \(F_1(\boldsymbol{k},\boldsymbol{d},J,I)\) we get

$$ \mathrm{Vol}(F_1(\boldsymbol{k},\boldsymbol{d},J,I)) = b^{-|\boldsymbol{k}|_{J^c}}b^{-(|\boldsymbol{k}+\boldsymbol{d}+2|_J)} = b^{-(|\boldsymbol{k}|+|\boldsymbol{d}+2|_J)}. $$

(3) This conditional probability is given by \(\eta /(n-1)\), where \(\eta \) is the number of points \(\mathbf {u}_{\ell }\) with \(\ell \ne i\) that are in \(F_2\) given that \(\mathbf {u}_i \in F_1\). Hence for \(j \in I\) we must have \(\gamma _b(u_{i,j},u_{\ell ,j}) \ge k_j+d_j+2\); for \(j \in I^c\) we must have \(\gamma _b(u_{i,j},u_{\ell ,j}) \ge k_j\). For \(j \in J \cap I^c\), the requirement that \(u_{i,j} \in F_1\) means \(u_{\ell ,j}\) must satisfy:

  (a) it must have the same first \(d_j\) digits as \(u_{i,j}\);

  (b) its \((d_j+1)\)th digit must be 1 (while the \((d_j+1)\)th digit of \(u_{i,j}\) is 0);

  (c) the digits \(u_{\ell ,j,r}\) for \(d_j+2 \le r \le d_j+k_j+2\) must be 0.

If we only had to satisfy requirement (a), then we would have \(\eta = m_b(\boldsymbol{k},\boldsymbol{d},2,J,I)\). However, the requirements (b) and (c) imply

$$ \eta = m_b(\boldsymbol{k},\boldsymbol{d},2,J,I) \prod _{j \in J \cap I^c} \frac{1}{b-1} \frac{1}{b^{k_j+1}}, $$

where the term \(1/(b-1)\) handles restriction (b) while the term \(b^{-k_j-1}\) handles restriction (c). Therefore

$$\begin{aligned} P(\mathbf {V}\in F_2(\boldsymbol{k},\boldsymbol{d},J,I)|\mathbf {U}\in F_1(\boldsymbol{k},\boldsymbol{d},J,I))&= \frac{m_b(\boldsymbol{k},\boldsymbol{d},2,J,I)}{n-1} \left( \frac{1}{b-1}\right) ^{|J|-|I|} \frac{1}{b^{|\boldsymbol{k}+1|_{J \cap I^c}}}\\&= \frac{m_b(\boldsymbol{k},\boldsymbol{d},2,J,I)}{n-1} \frac{(b-1)^{|I|-|J|}}{b^{|\boldsymbol{k}|_{J \cap I^c}+|J|-|I|}}. \end{aligned}$$

(4) The decomposition \(\cup _{K,I \subseteq J} F(\boldsymbol{k},\boldsymbol{d},J,I,K)\) is obtained by expanding each \(Y_{k_j}^{(d_j)}\) as \(Y_{k_j,1}^{(d_j)} \cup Y_{k_j,2}^{(d_j)}\). Then, we need to prove that for \(K_1,K_2 \subseteq J\),

$$\begin{aligned} P((\mathbf {U},\mathbf {V}) \in F(\boldsymbol{k},\boldsymbol{d},J,I,K_1)) = P((\mathbf {U},\mathbf {V}) \in F(\boldsymbol{k},\boldsymbol{d},J,I,K_2)). \end{aligned}$$
(30)

Noting that the equality (7) can be generalized to \(P((\mathbf {U},\mathbf {V}) \in {\mathcal R}) =\sum _{\boldsymbol{ i}\ge \mathbf {0}} \psi _{\boldsymbol{ i}}V_{\boldsymbol{ i}}({\mathcal R})\), it is clear that to prove (30), it is sufficient to show that the volume vectors corresponding to \(F(\boldsymbol{k},\boldsymbol{d},J,I,K_1)\) and \(F(\boldsymbol{k},\boldsymbol{d},J,I,K_2)\) are equal. To do so, since each entry \(V_{\boldsymbol{ i}}({\mathcal R}) = \prod _{j=1}^s V_{i_j}({\mathcal R}_j)\) (where for \({\mathcal R} = {\mathcal R}_1 \times {\mathcal R}_2\) we write \({\mathcal R}_j = {\mathcal R}_{1,j} \times {\mathcal R}_{2,j}\)), it is sufficient to show that for fixed \(k\) and \(d\), \(V_{11}:= V(Y_{k,1}^{(d)} \times Y_{k,1}^{(d)}) = V(Y_{k,2}^{(d)} \times Y_{k,2}^{(d)})=: V_{22}\) and \(V_{12} := V(Y_{k,1}^{(d)} \times Y_{k,2}^{(d)})= V(Y_{k,2}^{(d)} \times Y_{k,1}^{(d)})=: V_{21}\). But this follows from an argument similar to the one used in the proof of Lemma 4.14 in [10], which we adapt for our setup. First we introduce the set \({\mathcal F} = \{ab^{-k}: a\in \mathbb {Z},k \in \mathbb {N}\} \subseteq \mathbb {R}\) which has Lebesgue measure 0. Then, we argue that for \(x,y \in (0,b^{-(k+d+2)}) \cap {\mathcal F}^c\), we have

$$\begin{aligned} \gamma _b\left( \frac{1}{b^{d+1}}-x,\frac{1}{b^{d+1}}-y\right)= & {} \gamma _b\left( \frac{1}{b^{d+1}}+x,\frac{1}{b^{d+1}}+y\right) \\ \gamma _b\left( \frac{1}{b^{d+1}}-x,\frac{1}{b^{d+1}}+y\right)= & {} \gamma _b\left( \frac{1}{b^{d+1}}+x,\frac{1}{b^{d+1}}-y\right) . \end{aligned}$$

Therefore, for \((x,y) \in Y_{k}^{(d)} \times Y_k^{(d)} \), \(D_i\) is, up to a set of measure 0, invariant under the transformation \((x,y) \mapsto \left( \frac{2}{b^{d+1}}-x,\frac{2}{b^{d+1}}-y\right) \). This transformation maps \(Y_{k,1}^{(d)} \times Y_{k,1}^{(d)}\) to \(Y_{k,2}^{(d)} \times Y_{k,2}^{(d)}\) (and vice-versa) and \(Y_{k,1}^{(d)} \times Y_{k,2}^{(d)}\) to \(Y_{k,2}^{(d)} \times Y_{k,1}^{(d)}\) (and vice-versa). Therefore \(V_{11}=V_{22}\), and \(V_{12} = V_{21}\), as required. \(\square \)
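The two \(\gamma _b\) identities above can be tested numerically; the following sketch (ours) uses exact rational arithmetic, with illustrative choices \(b=3\), \(d=1\), \(k=1\), and random non-\(b\)-adic rationals so that the points avoid the null set \({\mathcal F}\):

```python
from fractions import Fraction
import random

def digits(x, b, n):
    out = []
    for _ in range(n):
        x *= b
        d = int(x)
        out.append(d)
        x -= d
    return out

def gamma_b(x, y, b, prec=40):
    dx, dy = digits(x, b, prec), digits(y, b, prec)
    k = 0
    while k < prec and dx[k] == dy[k]:
        k += 1
    return k

b, d = 3, 1
c = Fraction(1, b**(d + 1))        # the reflection point 1/b^(d+1)
bound = Fraction(1, b**(d + 3))    # b^-(k+d+2) with k = 1
random.seed(0)
for _ in range(100):
    # x, y in (0, b^-(k+d+2)), with non-terminating base-b expansions
    x = Fraction(random.randint(1, 100), 100003) * bound
    y = Fraction(random.randint(1, 100), 100003) * bound
    assert gamma_b(c - x, c - y, b) == gamma_b(c + x, c + y, b)
    assert gamma_b(c - x, c + y, b) == gamma_b(c + x, c - y, b)
```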

Proof (of Lemma 4)

First, if \(\ell = 2i+1\), then

$$ g_{j,i}(\ell ) = \left( \frac{b}{b-1}\right) ^{j-i-1} \le \left( \frac{b}{b-1}\right) ^{i+j-\ell } $$

and

$$ \left( \frac{b}{b-1}\right) ^{i+j-\ell } = \left( \frac{b}{b-1}\right) ^{j-i-1} $$

so in this case we actually have an equality, i.e.,

$$ g_{j,i}(\ell ) = \left( \frac{b}{b-1}\right) ^{i+j-\ell }. $$

If \(2i< \ell < i+j\) and \(\ell \) is even, then

$$ \left( \frac{b}{b-1}\right) ^{i+j-\ell } \ge 1 = g_{j,i}(\ell ) $$

since \(\ell < i+j\). If \(2i< \ell < i+j\) and \(\ell \) is odd, then we must show that

$$\begin{aligned} 1 + \frac{b^{j+i-\ell }}{(b-1)^{j-i}} \left( {\begin{array}{c}j-i-1\\ \ell -2i\end{array}}\right) \le \left( \frac{b}{b-1}\right) ^{i+j-\ell }. \end{aligned}$$
(31)

Now let \(k = \ell -2i\) and \(r=j-i\). This means k is odd and \(k>1\), and also \(k < r \le j \le s\) which means \(r \ge 4\). Using this notation, (31) is equivalent to

$$\begin{aligned} b^k \left( \frac{b-1}{b}\right) ^r +\left( {\begin{array}{c}r-1\\ k\end{array}}\right) \le (b-1)^k \text{ and } \text{ thus } \text{ to } \left( \frac{b-1}{b}\right) ^{r-k} + \frac{\left( {\begin{array}{c}r-1\\ k\end{array}}\right) }{(b-1)^k} \le 1. \end{aligned}$$
(32)

Now,

$$ \left( \frac{b-1}{b}\right) ^{r-k} = \sum _{j=0}^{r-k} \left( {\begin{array}{c}r-k\\ j\end{array}}\right) \left( \frac{-1}{b}\right) ^{j} $$

and \(\left( {\begin{array}{c}r-k\\ j\end{array}}\right) /b^j\) is decreasing with j, therefore

$$ \left( \frac{b-1}{b}\right) ^{r-k} \le 1 - \frac{r-k}{b}+\frac{(r-k)(r-k-1)}{2b^2}, $$

because the condition that \(k>1\) and \(k\) is odd implies \(k \ge 3\). Therefore, a sufficient condition for the second inequality in (32) to hold is that

$$\begin{aligned}&1-\frac{r-k}{b}+\frac{(r-k)(r-k-1)}{2b^2} + \frac{(r-1)(r-2)\ldots (r-k)}{(b-1)(b-1)\ldots (b-1)} \frac{1}{k!} \le 1 \nonumber \\ \text{ which } \text{ holds } \text{ iff } \,\,&\frac{(r-k)(r-k-1)}{2b^2} + \frac{(r-1)(r-2)\ldots (r-k)}{(b-1)(b-1)\ldots (b-1)} \frac{1}{k!} \le \frac{r-k}{b} \nonumber \\ \text{ which } \text{ holds } \text{ iff } \,\,&\frac{r-k-1}{2b} + \frac{(r-1)(r-2)\ldots (r-k+1)}{(b-1)(b-1)\ldots (b-1)} \frac{b}{b-1}\frac{1}{k!} \le 1. \end{aligned}$$
(33)

Now,

$$ \frac{r-k-1}{2b}< \frac{1}{2} \Leftrightarrow r-k-1 < b, $$

and the latter inequality holds since \(b \ge s \ge r\) and \(k \ge 3\). Also,

$$ \frac{r-k+1}{2(b-1)} \le \frac{1}{2} \Leftrightarrow r-k+1 \le b -1 \Leftrightarrow r+2-k \le b $$

and the latter inequality holds since \(k \ge 3\) and \(b \ge s \ge r\). Therefore, for \(k \ge 3\) the LHS of (33) is bounded (strictly) from above by

$$ \frac{1}{2} + \frac{1}{2} \frac{(r-1)(r-2)\ldots (r-k+2)}{(b-1)(b-1)\ldots (b-1)} \frac{b}{b-1} \frac{1}{k(k-1)\ldots 3} < 1 $$

because

$$ \frac{(r-1)(r-2)\ldots (r-k+2)}{(b-1)(b-1)\ldots (b-1)} \le 1 $$

since \(b \ge s \ge r\) and

$$ \frac{b}{b-1} \frac{1}{k(k-1)\ldots 3} \le \frac{4}{3} \times \frac{1}{3} < 1 $$

since \(b/(b-1)\) decreases with b, and the condition \(1<k<r\le s\) with \(k \ge 3\) means we can assume \(b \ge s \ge r \ge 4\). This proves that (32) holds. \(\square \)
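As a further check (ours, not the paper's), inequality (32), in its first form \(b^k ((b-1)/b)^r + \left( {\begin{array}{c}r-1\\ k\end{array}}\right) \le (b-1)^k\), can be verified by direct enumeration over small admissible parameters \(3 \le k < r \le b\) with \(k\) odd:

```python
from math import comb

# b^k * ((b-1)/b)^r + C(r-1, k) <= (b-1)^k for odd k >= 3, k < r <= b.
assert all(b**k * ((b - 1) / b)**r + comb(r - 1, k) <= (b - 1)**k + 1e-9
           for b in range(4, 30)
           for r in range(4, b + 1)
           for k in range(3, r, 2))
```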

Proof (of Lemma 5)

(i) Using Lemma 12 with \(c=2\) (which implies \(c|I|+|I^{*c}| = |J|+|I|\)), we first consider the case where \(m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J \ge |I|+|J|\). In this case

$$ m(\boldsymbol{k},\boldsymbol{d},2,J,I) = (b-1)^{|J|-|I|} b^{m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-|J|-|I| }. $$

Hence if \(m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J \ge |I|+|J|\), then

$$\begin{aligned} \psi _m(\boldsymbol{k},\boldsymbol{d},J,I)= & {} \frac{(b-1)^{|J|-|I|}}{n-1} b^{m-|\boldsymbol{k}|_{I^*}- |\boldsymbol{d}|_J -|J|-|I| } b^{|\boldsymbol{k}|_{I^*}+|\boldsymbol{d}|_J +|J|+|I|}(b-1)^{|I|-|J|} \\= & {} \frac{b^{m}}{b^m-1}. \end{aligned}$$

(ii) Next, again based on Lemma 12, we consider the case \(2|I| <m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J < |I|+|J|\). First, if \(m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J\) is odd and \(m - |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J>2|I|+1\), then

$$\begin{aligned} m(\boldsymbol{k},\boldsymbol{d},2,J,I)&\le \left( b^{m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-2|I| } \left( \frac{b-1}{b}\right) ^{|J|-|I|} +\left( {\begin{array}{c}|J|-|I|-1\\ m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-2|I|\end{array}}\right) \right) . \end{aligned}$$
(34)

Hence in that case

$$\begin{aligned} \psi _m(\boldsymbol{k},\boldsymbol{d},J,I)&\le \frac{1}{b^m-1} b^{|\boldsymbol{k}|_{I^*}+|\boldsymbol{d}|_J+|J|+|I|}(b-1)^{|I|-|J|} \\&\qquad \cdot \left( b^{m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-2|I| } \left( \frac{b-1}{b}\right) ^{|J|-|I|} +\left( {\begin{array}{c}|J|-|I|-1\\ m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-2|I|\end{array}}\right) \right) \\&= \frac{1}{b^m-1} \left( b^m +b^{|\boldsymbol{k}|_{I^*}+|\boldsymbol{d}|_J+|J|+| I |}(b-1)^{|I|-|J|} \left( {\begin{array}{c}|J|-|I|-1\\ m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-2|{I}|\end{array}}\right) \right) \\&= \frac{b^m}{b^m-1} \left( 1+\frac{b^{|J|+ |I| -(m -|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J)}}{(b-1)^{|J|-|I|} } \left( {\begin{array}{c}|J|-|I|-1\\ m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-2|{I}|\end{array}}\right) \right) . \end{aligned}$$

A similar calculation shows that if \(m - |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J\) is even then

$$ \psi _m(\boldsymbol{k},\boldsymbol{d},J,I) \le \frac{b^m}{b^m-1}. $$

If \(m - |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J-2|I|=1\), then from Lemma 12 we know that \(m(\boldsymbol{k},\boldsymbol{d},2,J,I) =(b-1)\), and thus

$$\begin{aligned} \psi _m(\boldsymbol{k},\boldsymbol{d},J,I)= & {} \frac{1}{b^m-1} b^{|\boldsymbol{k}|_{I^*}+|\boldsymbol{d}|_J+|J|+|I|}(b-1)^{|I|-|J|} (b-1) \\= & {} \frac{b^m}{b^m-1} b^{|J|-|I|-1}(b-1)^{1+|I|-|J|} = \frac{b^m}{b^m-1} \left( \frac{b}{b-1}\right) ^{|J|-|I|-1}. \end{aligned}$$

Combining these three cases, we get that for \(2|I|< m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J < |J|+|I|\),

$$ \psi _m(\boldsymbol{k},\boldsymbol{d},J,I) \le \frac{b^m}{b^m-1} g_{|J|,|I|}(m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J) $$

with

$$ g_{j,i}(\ell ) = {\left\{ \begin{array}{ll} \left( \frac{b}{b-1}\right) ^{j-i-1} &{} \text{ if } \ell = 2i+1 \\ 1 &{} \text{ if } \ell> 2i+1 \text{ and } \ell \text{ even } \\ 1+h_{j,i}(\ell ) &{} \text{ if } \ell > 2i+1 \text{ and } \ell \text{ odd, } \end{array}\right. } $$

where

$$ h_{j,i}(\ell ) = \frac{b^{j+i-\ell }}{(b-1)^{j-i}} \left( {\begin{array}{c}j-i-1\\ \ell -2i\end{array}}\right) . $$

(iii) When \(m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J \le 2|I|\), then \(m(\boldsymbol{k},\boldsymbol{d},2,J,I)=0\) and therefore we can set \(g_{|J|,|I|}(m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_J) =0\). \(\square \)

Proof (of Lemma 7)

First we observe that

$$ R(b,s) = \left( \frac{b}{2(b-1)}\right) ^s P_{\tilde{s},s} ((b-1)/b), $$

where \(P_{m,n}(z)\) is the polynomial defined in Lemma 10, and recall that \(\tilde{s} = \lfloor s/2 \rfloor -1.\) In the notation of (41), \(z=(b-1)/b\), \(z/(z+1) = (b-1)/(2b-1)\), and \(1+z = (2b-1)/b\). Therefore

$$ R(b,s) = \left( \frac{b}{2(b-1)}\right) ^s \left( \frac{2b-1}{b}\right) ^s Pr\left( X > \frac{b-1}{2b-1}\right) , $$

where \(X\) is a beta random variable with parameters \(\tilde{s}+1,s-\tilde{s}\). Now, it is known that a beta distribution with parameters \(a,c\) such that \(1<a<c\) has a median no larger than \(a/(a+c)\). Therefore, if we can show that

$$ \frac{\tilde{s}+1}{s+1} \le \frac{b-1}{2b-1}, $$

then it means \(Pr(X > (b-1)/(2b-1)) \le 1/2\). Now,

$$ \frac{b-1}{2b-1} = \frac{1}{2} - \frac{1}{2(2b-1)} \ge \frac{1}{2} - \frac{1}{2(2s-1)}. $$

On the other hand,

$$ \frac{\tilde{s}+1}{s+1} \le \frac{s}{2(s+1)} = \frac{1}{2} - \frac{1}{2(s+1)}. $$

Since \(2(2s-1) \ge 2(s+1)\) for \(s \ge 2\), we have that

$$ \frac{\tilde{s}+1}{s+1} \le \frac{1}{2} - \frac{1}{2(s+1)} \le \frac{1}{2} -\frac{1}{2(2s-1)} \le \frac{b-1}{2b-1}, $$

as required. So the last step is to show that

$$\begin{aligned} \frac{1}{2} \left( \frac{b}{2(b-1)}\right) ^s \left( \frac{2b-1}{b}\right) ^s \le 1. \end{aligned}$$
(35)

The inequality (35) is equivalent to

$$ \left( \frac{2b-1}{2b-2}\right) ^s \le 2, \quad \text{ i.e., } \quad s\ln \frac{2b-1}{2b-2} \le \ln 2, \quad \text{ and } \text{ finally } \quad s \ln \left( 1 + \frac{1}{2(b-1)}\right) \le \ln 2. $$

Now, \(\ln (1 + \frac{1}{2(b-1)}) \le \ln (1 + \frac{1}{2(s-1)}) \le \frac{1}{2(s-1)}\), hence it is sufficient to show that

$$ \frac{1}{2(s-1)} \le \frac{\ln 2}{s} \Leftrightarrow \frac{s}{s-1} \le 2 \ln 2 = 1.3862... $$

which holds for any \(s \ge 4\). For \(s=2\), we have that \(R(b,2) = 0.25(b/(b-1))^2\) and since \(b \ge s\) we have that \(b/(b-1) \le 2\), which implies \(0.25(b/(b-1))^2 \le 1\) for all \(b \ge 2\). When \(s=3\), then \(R(b,3) = 0.125 (b/(b-1))^3 \le 1\), since \(b \ge 3\) implies \(b/(b-1) \le 3/2\). \(\square \)
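As a sanity check on this proof, the representation \(R(b,s) = (b/(2(b-1)))^s P_{\tilde{s},s}((b-1)/b)\) used above can be evaluated with exact rational arithmetic. The short script below (our own helper, not part of the paper) confirms that \(R(b,s) \le 1\) on a small grid with \(b \ge s \ge 2\).

```python
from fractions import Fraction
from math import comb

def R(b, s):
    # R(b,s) = (b/(2(b-1)))^s * P_{s~,s}((b-1)/b), where s~ = floor(s/2) - 1
    # and P_{m,n}(z) = sum_{j=0}^m C(n,j) z^j (the polynomial of Lemma 10)
    s_tilde = s // 2 - 1
    z = Fraction(b - 1, b)
    P = sum(comb(s, j) * z**j for j in range(s_tilde + 1))
    return Fraction(b, 2 * (b - 1))**s * P

for s in range(2, 11):
    for b in range(s, 16):
        assert R(b, s) <= 1, (b, s)
print("R(b,s) <= 1 verified for 2 <= s <= 10, s <= b <= 15")
```

For \(s=2\) and \(s=3\) the partial sum reduces to its \(j=0\) term, recovering the values \(0.25(b/(b-1))^2\) and \(0.125(b/(b-1))^3\) used at the end of the proof.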

Proof (of Lemma 8)

We have that \((m,\mathbf {0}) \in {\mathcal B}\) is equivalent to assuming \(|J|\le m < 2|J|\). We will deal with the case \(m = 2|J|-1\) separately, and will first assume \(|J| \le m \le 2|J|-3\) and m is odd.

(i) Case where \(|J| \le m \le 2|J|-3\) and m is odd:

In this case, \(m^*=m > 2|I|\) if and only if \(|I| < 0.5m\), i.e., \(|I| \le 0.5(m-1)\). Also, \(m^*=m < |I|+|J|\) if and only if \(|I|>m-|J|\). So \(G(m,s,J,\mathbf {0},\mathbf {0})\) is of the form

$$\begin{aligned} G(m,s,J,\mathbf {0},\mathbf {0}) = \sum _{i=0}^{0.5(m-3)} \left( {\begin{array}{c}|J|\\ i\end{array}}\right) + \sum _{i=\max (0,m-|J|+1)}^{0.5(m-3)} \left( {\begin{array}{c}|J|\\ i\end{array}}\right) h_{|J|,i}(m) + \left( {\begin{array}{c}|J|\\ 0.5(m-1)\end{array}}\right) \left( \frac{b}{b-1}\right) ^{|J|-0.5(m-1)-1}. \end{aligned}$$
(36)

Using Lemma 14 with \(s=|J|\), we know that (36) is increasing with m, so \(G(m,s,J,\mathbf {0},\mathbf {0}) \le G(2|J|-3,s,J,\mathbf {0},\mathbf {0})\) for all \(m \le 2|J|-3\) such that \((m,\mathbf {0}) \in {\mathcal B}\). Furthermore, we have that

$$\begin{aligned} G(2|J|-3,s,J,\mathbf {0},\mathbf {0})&= \sum _{j=0}^{|J|-3} \left( {\begin{array}{c}|J|\\ j\end{array}}\right) + \left( {\begin{array}{c}|J|\\ 2\end{array}}\right) \left( {\begin{array}{c}1\\ 1\end{array}}\right) \frac{b}{(b-1)} \\&= 2^{|J|}-1-|J| +\frac{|J|(|J|-1)}{2} \left( \frac{b}{b-1}-1 \right) \\&= 2^{|J|}-1-|J| +\frac{|J|(|J|-1)}{2(b-1)} \le 2^{|J|}-1 - \frac{|J|}{2}, \end{aligned}$$

where the last inequality is obtained by observing that \(|J| \le s \le b\).

(ii) Case where \(m=2|J|-1\).

In this case, \(m \ge |I|+|J|\) for all I such that \(|I| \le |J|-2\). Also, for subsets I such that \(|I| = |J|-1\), we cannot have \(2|I|< m < |I|+|J|\), since in that case the interval \((2|I|, |I|+|J|)\) has length \(|I|+|J| - 2|I| = |J|-|I| = 1\) and thus contains no integer. Therefore, there is no I such that \(2|I|< m < |I|+|J|\), and thus

$$\begin{aligned} G(2|J|-1,s,J,\mathbf {0},\mathbf {0}) = \sum _{j=0}^{|J|-1} \left( {\begin{array}{c}|J|\\ j\end{array}}\right) = 2^{|J|}-1 \ge G(2|J|-3,s,J,\mathbf {0},\mathbf {0}). \end{aligned}$$
(37)

(iii) If m is even with \(|J| \le m < 2|J|\), then \(G(m,s,J, \mathbf {0},\mathbf {0}) \le \sum _{j=0}^{|J|-1} \left( {\begin{array}{c}|J|\\ j\end{array}}\right) = 2^{|J|}-1\). \(\square \)
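The closed-form manipulations in case (i) are easy to verify mechanically. The sketch below (with n standing for \(|J|\); the function name is ours) checks both equalities in the display for \(G(2|J|-3,s,J,\mathbf {0},\mathbf {0})\) as well as the final bound, using exact rational arithmetic.

```python
from fractions import Fraction
from math import comb

def G_case_i(n, b):
    # first line of the display: sum_{j=0}^{n-3} C(n,j) + C(n,2) * b/(b-1)
    return sum(comb(n, j) for j in range(n - 2)) + comb(n, 2) * Fraction(b, b - 1)

for n in range(3, 12):          # n plays the role of |J|, with n <= b
    for b in range(n, 16):
        val = G_case_i(n, b)
        # closed form from the second line of the display
        assert val == 2**n - 1 - n + Fraction(n * (n - 1), 2 * (b - 1))
        # final bound, valid since n <= b
        assert val <= 2**n - 1 - Fraction(n, 2)
print("Lemma 8, case (i): display identities verified")
```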

Proof (of Lemma 9)

We let \(j=|J|\) and write

$$ G(m,\boldsymbol{k}) = \sum _{I:|I| \le j-1} g_{j,|I|}(m-|\boldsymbol{k}|_{I^*}). $$

That is, \(G(m,\boldsymbol{k})=G(m,s,J,\boldsymbol{k},\mathbf {0})\), i.e., we drop the dependence on s, J and \(\boldsymbol{d}\).

First, we define \(\iota \) as the size of the largest (strict) subset I of J that contributes a non-zero value to \(G(m,\boldsymbol{k})\). That is,

$$\begin{aligned} \iota&:= \max \{|I| : I \subset J, m-|\boldsymbol{k}|_{I^*} \ge 2|I|+1\}. \end{aligned}$$

Note that \(0 \le \iota \le j-1\). Also, it is useful at this point to mention that our optimal solution \((\tilde{m},\mathbf {0})\) will be such that \(\tilde{m} = 2\iota +1\).

We then define \( {\mathcal G}_{m,\boldsymbol{k}} := \{I:0 \le |I| \le \iota \}. \) Using this notation we can write

$$\begin{aligned} G(m,\boldsymbol{k}) = \sum _{I: I \in {\mathcal G}_{m,\boldsymbol{k}}} g_{j,|I|}(m-|\boldsymbol{k}|_{I^*}). \end{aligned}$$
(38)

This holds because if \(I \notin {\mathcal G}_{m,\boldsymbol{k}}\), then \(m-|\boldsymbol{k}|_{I^*}\le 2|I|\) and thus \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*}) = 0\).

Next we introduce a definition:

Definition: For a given J and \(I \subset J\), we say that \((m,\boldsymbol{k})\) is dominated by  \((m',\mathbf {0})\)  at  I if \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*}) \le g_{j,|I|}(m')\).

Our strategy will be as follows: consider the set \({\mathcal M} = \{ 1,3,\ldots , 2\iota +1\}\). We claim that for each \(I \in {{\mathcal G}}_{m,\boldsymbol{k}}\), there exists \((m',\mathbf {0})\) with \(m' \in {\mathcal M}\) such that \((m,\boldsymbol{k})\) is dominated by \((m',\mathbf {0})\) at I. In turn, this will allow us to bound each term \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*})\) in (38) by a term of the form \(g_{j,|I|}(m')\). We then only need to keep track, for each \(m'\), of how many times \(g_{j,|I|}(m')\) has been used in this way—something we will do by introducing counting numbers denoted by \(\eta (\cdot )\). This strategy is a key intermediate step to get to our end result, which is to show that \(G(m,\boldsymbol{k}) \le G(\tilde{m},\mathbf {0})\).

To prove the existence of this \(m'\), we define a mapping \({\mathcal L}_{m,\boldsymbol{k}}: {{\mathcal G}}_{m,\boldsymbol{k}} \rightarrow {\mathcal M}\) that will, for a given m and \(\boldsymbol{k}\), assign to each subset \(I \in {{\mathcal G}}_{m,\boldsymbol{k}}\), the largest integer \(m' \in {\mathcal M}\) such that \((m,\boldsymbol{k})\) is dominated by \((m',\mathbf {0})\) at I. The reason why we choose the largest \(m' \in {\mathcal M}\) is that this provides us with the tightest bound on \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*})\), as should be clear from the behavior of the function \(g_{j,i}(\ell )\), as described in Lemma 13.

The mapping \({\mathcal L}_{m,\boldsymbol{k}}(I)\) is defined as follows:

$$ {\mathcal L}_{m,\boldsymbol{k}}(I) := {\left\{ \begin{array}{ll} 2\iota +1 &{} \text{ if }\, m-|\boldsymbol{k}|_{I^*} \ge 2\iota +1\\ 2\ell +1 &{} \text{ if }\, 2\ell +1\le m-|\boldsymbol{k}|_{I^*} \le 2\ell +2, \text{ where } 0 \le \ell < \iota \\ 1 &{} \text{ if }\, m-|\boldsymbol{k}|_{I^*}\le 0. \end{array}\right. } $$

Claim: For each \(I \in {{\mathcal G}}_{m,\boldsymbol{k}}\), \(({\mathcal L}_{m,\boldsymbol{k}}(I),\mathbf {0})\) dominates \((m,\boldsymbol{k})\) at I.

Proof: We need to show that \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*}) \le g_{j,|I|}({\mathcal L}_{m,\boldsymbol{k}}(I))\). We proceed by examining the three possible cases for \({\mathcal L}_{m,\boldsymbol{k}}(I)\) based on its definition.

(i) Assume \(m-|\boldsymbol{k}|_{I^*} \ge 2\iota +1\) and therefore \({\mathcal L}_{m,\boldsymbol{k}}(I)=2\iota +1\). By definition of \(\iota \) we have \(2\iota +1 > 2i\), and therefore Part 2 of Lemma 13 applies, which implies that \(g_{j,|I|}(2\iota +1) \ge g_{j,|I|}(m-|\boldsymbol{k}|_{I^*})\). (ii) We now assume \(2\ell +1 \le m-|\boldsymbol{k}|_{I^*} \le 2{\ell }+2\) for some \( 0\le \ell < \iota \). In this case, \({\mathcal L}_{m,\boldsymbol{k}}(I) = 2\ell +1\) and \(m-|\boldsymbol{k}|_{I^*}\) is either equal to \(2\ell +1\) or to \(2\ell +2\). If \(m-|\boldsymbol{k}|_{I^*} = 2\ell +1\) then clearly \(g_{j,|I|}({\mathcal L}_{m,\boldsymbol{k}}(I)) \ge g_{j,|I|}(m-|\boldsymbol{k}|_{I^*})\) since in fact these two quantities are equal. If \(m-|\boldsymbol{k}|_{I^*} = 2\ell +2\) then since \({\mathcal L}_{m,\boldsymbol{k}}(I) =2\ell +1 \ge 1\) is odd we can use Part 1 of Lemma 13 to conclude that \(g_{j,|I|}({\mathcal L}_{m,\boldsymbol{k}}(I) ) \ge g_{j,|I|}(m-|\boldsymbol{k}|_{I^*})\). (iii) If \(m-|\boldsymbol{k}|_{I^*} \le 0\), then \(m-|\boldsymbol{k}|_{I^*} \le 2|I|\) since \(I \in {{\mathcal G}}_{m,\boldsymbol{k}}\) implies \(0 \le |I| \le \iota \), and therefore \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*}) =0 \le g_{j,|I|}(1)\).\(\square \)

Now, recall that shortly after stating (38), when we explained our strategy to replace the terms \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*})\) by \(g_{j,|I|}(m')\) in (38), we also said we would need counting numbers \(\eta (\cdot )\) to tell us how many times, for each \(m'\), the term \(g_{j,|I|}(m')\) has been used in this way. These counting numbers are essential to apply the optimization result involving weighted sums that is given in Lemma 15, which is the key to get our final result. They are defined as follows, for \(0 \le i,\ell \le \iota \):

$$ \eta (\ell ,i,\boldsymbol{k}) := |\{I \in {{\mathcal G}}_{m,\boldsymbol{k}}: |I|=i, {\mathcal L}_{m,\boldsymbol{k}}(I)=2(\iota -\ell )+1\}|. $$

Note that \(\sum _{\ell =0}^{\iota } \eta (\ell ,i,\boldsymbol{k}) = \left( {\begin{array}{c}j\\ i\end{array}}\right) \). Also we can think of \(p(\ell ,i,\boldsymbol{k}) = \eta (\ell ,i,\boldsymbol{k})/\left( {\begin{array}{c}j\\ i\end{array}}\right) \) as the probability that a randomly chosen subset I of i elements from J is such that \(|\boldsymbol{k}|_{I^*} \in {\mathcal R}_{\ell }\), where

$$\begin{aligned} {\mathcal R}_{\ell } := {\left\{ \begin{array}{ll} \{0,1,\ldots ,m-(2\iota +1)\} &{} \text{ if } \ell =0\\ \{m-(2\iota +1)+2\ell -1,m-(2\iota +1)+2\ell \} &{} \text{ if } 1 \le \ell < \iota \\ \{m-2,m-1,\ldots \} &{} \text{ if } \ell =\iota . \end{array}\right. } \end{aligned}$$
(39)

To get the final result, we write:

$$\begin{aligned} G(m,\boldsymbol{k})= & {} \sum _{I: 0 \le |I| \le \iota } g_{j,|I|}(m-|\boldsymbol{k}|_{I^*}) \nonumber \\\le & {} \sum _{i=0}^{\iota } \sum _{\ell =0}^{\iota } \eta (\ell ,i,\boldsymbol{k}) g_{j,i}(2(\iota -\ell )+1)\nonumber \\= & {} \sum _{i=0}^{\iota } \sum _{\ell =0}^{\iota } p(\ell ,i,\boldsymbol{k}) \left( {\begin{array}{c}j\\ i\end{array}}\right) g_{j,i}(2(\iota -\ell )+1)\nonumber \\= & {} \sum _{i=0}^{\iota } \sum _{\ell =0}^{\iota -i} p(\ell ,i,\boldsymbol{k}) \left( {\begin{array}{c}j\\ i\end{array}}\right) g_{j,i}(2(\iota -\ell )+1) \\\le & {} \sum _{i=0}^{\iota } \left( {\begin{array}{c}j\\ i\end{array}}\right) g_{j,i}(2\iota +1) = G(\tilde{m},\mathbf {0}),\nonumber \end{aligned}$$
(40)

where \(\tilde{m}=2\iota +1\). In the above, the first inequality is obtained by replacing \(g_{j,|I|}(m-|\boldsymbol{k}|_{I^*})\) by \(g_{j,i}(2(\iota -\ell )+1)\) for each of the \(\eta (\ell ,i,\boldsymbol{k})\) pairs \((m,\boldsymbol{k})\) dominated by \((2(\iota -\ell )+1,\mathbf {0})\) at I, where \(|I|=i\); the third equality holds because if \(\ell >\iota -i\), then \( 2(\iota -\ell )+1 < 2(\iota -(\iota -i))+1 = 2i+1 \) and therefore \(g_{j,i}(2(\iota -\ell )+1)=0\). Similarly, \(\ell \le \iota -i\) implies \(2(\iota -\ell )+1 \ge 2i+1\) and so \(g_{j,i}(2(\iota -\ell )+1)>0\) in this case. The last inequality comes from applying Lemma 15, whose conditions hold because:

1. \(\left( {\begin{array}{c}j\\ i\end{array}}\right) g_{j,i}(2(\iota -\ell )+1)\) corresponds to \(x_{\ell +1,i+1}\) in Lemma 15;

2. Lemma 14 together with (37) shows the decreasing-row-sums condition is satisfied, i.e., \(G(2(\iota -\ell )+1,\mathbf {0}) = \sum _i x_{\ell +1,i+1}\) is decreasing with \(\ell \);

3. the increasing-within-column assumption of Lemma 15 is satisfied because the sum (40) only includes positive values of \(g_{j,i}(2(\iota -\ell )+1)\) (as shown above), which in turn allows us to invoke Part 2 of Lemma 13;

4. \(p(\ell ,i,\boldsymbol{k})\) corresponds to \(\alpha _{\ell +1,i+1}\) in Lemma 15;

5. to see that the \(p(\ell ,i,\boldsymbol{k})\)'s obey the decreasing-cumulative-sums condition (51) in Lemma 15, we argue that our probabilistic interpretation of the \(p(\ell ,i,\boldsymbol{k})\) based on the sets defined in (39) makes it clear that for \(i=0,\ldots ,\iota -1\) and \(0 \le r \le \iota \),

$$ \sum _{\ell =0}^{r} p(\ell ,i,\boldsymbol{k}) \ge \sum _{\ell =0}^{r} p(\ell ,i+1,\boldsymbol{k}). $$

Therefore \( G(m,\boldsymbol{k}) \le G(\tilde{m},\mathbf {0}), \) as required. \(\square \)

1.1 Technical Lemmas

The following result [7, Lemma 2] is used to prove intermediate inequalities needed in our analysis.

Lemma 10

Let \(P_{m,n}(z)\) be the polynomial defined by

$$ P_{m,n}(z) = \sum _{j=0}^m \left( {\begin{array}{c}n\\ j\end{array}}\right) z^j, \qquad 0< m < n-1. $$

Then for \(z \ne -1\)

$$\begin{aligned} \frac{P_{m,n}(z) }{(1+z)^n \left( {\begin{array}{c}n\\ m\end{array}}\right) (n-m)} = \int \limits _{z/(z+1)}^1 u^{m} (1-u)^{n-m-1}du. \end{aligned}$$
(41)

We also need the following identity for integers \(c>a\ge 0\), which may be found in [5, (5.16)]

$$\begin{aligned} \sum _{j=0}^a \left( {\begin{array}{c}c\\ j\end{array}}\right) (-1)^j = \left( {\begin{array}{c}c-1\\ a\end{array}}\right) (-1)^a. \end{aligned}$$
(42)
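Identity (42) can be confirmed exhaustively for small parameters:

```python
from math import comb

# (42): sum_{j=0}^{a} C(c,j) (-1)^j = C(c-1,a) (-1)^a for integers c > a >= 0
for c in range(1, 25):
    for a in range(c):
        lhs = sum(comb(c, j) * (-1)**j for j in range(a + 1))
        assert lhs == comb(c - 1, a) * (-1)**a, (c, a)
print("identity (42) verified for 0 <= a < c <= 24")
```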

We now state and prove a number of technical lemmas that were used within the above proofs. The first two lemmas are used to prove Lemma 5, and the next three are used for Lemmas 8 and 9.

Lemma 11

For \(b \ge s \ge 2\) and \(0 \le k < s\), let

$$ Q(b,k,s) := \sum _{j=0}^{k} (-1)^j \left( {\begin{array}{c}s\\ j\end{array}}\right) (b^{k-j}-1). $$

Then

$$\begin{aligned} Q(b,k,s) \le {\left\{ \begin{array}{ll} b^k \left( \frac{b-1}{b}\right) ^s &{} \text{ if } k \text{ is } \text{ even }\\ b^k \left( \frac{b-1}{b}\right) ^s +\left( {\begin{array}{c}s-1\\ k\end{array}}\right) &{} \text{ if } k>1 \text{ is } \text{ odd },\\ b-1 &{} \text{ if } k=1. \end{array}\right. } \end{aligned}$$
(43)

Proof

The statement holds trivially for \(k=0\). For \(k>0\) we apply Lemma 10 and (42) to obtain

$$\begin{aligned} Q(b,k,s)&= b^k P_{k,s}(-1/b)- \left( {\begin{array}{c}s-1\\ k\end{array}}\right) (-1)^k\\&= b^k \left( \frac{b-1}{b}\right) ^s \int \limits _{-1/(b-1)}^1 u^k (1-u)^{s-k-1} c_{k,s} du - \left( {\begin{array}{c}s-1\\ k\end{array}}\right) (-1)^k \\&= b^k \left( \frac{b-1}{b}\right) ^s \left( \int \limits _{-1/(b-1)}^0 u^k (1-u)^{s-k-1} c_{k,s} du +1\right) - \left( {\begin{array}{c}s-1\\ k\end{array}}\right) (-1)^k \end{aligned}$$

where \(c_{k,s}\) is the constant that makes the integrand a beta pdf with parameters \((k+1,s-k)\), i.e., \(c_{k,s} = s \left( {\begin{array}{c}s-1\\ k\end{array}}\right) \).

Now, if k is odd then \(\int _{-1/(b-1)}^0 u^k (1-u)^{s-k-1} c_{k,s} du \le 0\) and thus we get

$$ Q(b,k,s) \le b^k \left( \frac{b-1}{b}\right) ^s +\left( {\begin{array}{c}s-1\\ k\end{array}}\right) . $$

Note also that when \(k=1\), \(Q(b,k,s) = b-1\), which is not necessarily bounded from above by \(b^k \left( \frac{b-1}{b}\right) ^s +\left( {\begin{array}{c}s-1\\ k\end{array}}\right) = b ((b-1)/b)^s+s-1\); it is for this reason that we treat the case \(k=1\) separately. If k is even, then

$$ \int \limits _{\frac{-1}{b-1}}^0 \!\!\! u^k (1-u)^{s-k-1} c_{k,s} du \le c_{k,s} \frac{1}{b-1} \frac{1}{(b-1)^k} \left( 1+ \frac{1}{b-1}\right) ^{s-k-1} =c_{k,s} \frac{b^{s-k-1}}{(b-1)^s}. $$

Therefore when k is even

$$\begin{aligned} Q(b,k,s)&\le b^k \left( \frac{b-1}{b}\right) ^s \left( 1+c_{k,s}\frac{b^{s-k-1}}{(b-1)^s} \right) - \left( {\begin{array}{c}s-1\\ k\end{array}}\right) \\&= b^k \left( \frac{b-1}{b}\right) ^s + \left( \frac{s}{b} -1\right) \left( {\begin{array}{c}s-1\\ k\end{array}}\right) \le b^k \left( \frac{b-1}{b}\right) ^s \end{aligned}$$

since \(s\le b\). \(\square \)
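The three cases of bound (43) can also be checked numerically with exact rational arithmetic; a minimal sketch (function names are ours):

```python
from fractions import Fraction
from math import comb

def Q(b, k, s):
    # Q(b,k,s) = sum_{j=0}^{k} (-1)^j C(s,j) (b^(k-j) - 1)
    return sum((-1)**j * comb(s, j) * (b**(k - j) - 1) for j in range(k + 1))

def bound(b, k, s):
    # right-hand side of (43)
    base = b**k * Fraction(b - 1, b)**s
    if k == 1:
        return b - 1
    if k % 2 == 1:
        return base + comb(s - 1, k)
    return base

for s in range(2, 10):
    for b in range(s, 14):
        for k in range(s):
            assert Q(b, k, s) <= bound(b, k, s), (b, k, s)
print("bound (43) verified for 2 <= s <= 9, s <= b <= 13, 0 <= k < s")
```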

Lemma 12

Consider a \((0,m,s)\)-net in base b. Let \(\emptyset \ne J \subset \{1,\ldots ,s\}\), and \(I \subseteq J\), with \(I^*=I \cup J^c\). Then the following bounds hold:

(i) if \(m \ge |\boldsymbol{k}|_{I^*} +|\boldsymbol{d}|_{J}+c|I|+|I^{*c}| \), then

$$ m_b(\boldsymbol{k},\boldsymbol{d},c,J,I;P_n)= b^{m-|\boldsymbol{k}|_{I^*} -|\boldsymbol{d}|_{J}-c|I|-|I^{*c}| } (b-1)^{|I^{*c}|}; $$

(ii) if \(|\boldsymbol{k}|_{I^*} + |\boldsymbol{d}|_{J} +c|I|+1< m < |\boldsymbol{k}|_{I^*} +|\boldsymbol{d}|_{J}+c|I|+|I^{*c}| \) then

$$\begin{aligned} m_b(\boldsymbol{k},\boldsymbol{d},c,J,I;P_n)\le & {} b^{m- |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_{J}} \left( \frac{b-1}{b}\right) ^{|I^{*c}|} \\&\qquad + \left( {\begin{array}{c}|I^{*c}|-1\\ m- |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_{J}-c|I|\end{array}}\right) {} \mathbf{1}_{m-|\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_{J}-c|I|} \end{aligned}$$

where

$$ \mathbf{1}_{x} = {\left\{ \begin{array}{ll} 1 &{} \text{ if } x \text{ is } \text{ odd } \\ 0 &{} \text{ otherwise }. \end{array}\right. } $$

(iii) if \(m = |\boldsymbol{k}|_{I^*} + |\boldsymbol{d}|_{J}+c|I| +1 \) then

$$ m_b(\boldsymbol{k},\boldsymbol{d},c,J,I;P_n) = (b-1). $$

(iv) if \(m \le |\boldsymbol{k}|_{I^*} + |\boldsymbol{d}|_{J}+c|I| \) then \(m_b(\boldsymbol{k},\boldsymbol{d},c,J,I;P_n)=0\).

Proof

Using the quantities \(n_b(\boldsymbol{k})\) defined in [10], their relation to \(m_b(\boldsymbol{k};P_n)\) and the value of the latter for a \((0,m,s)\)-net, we write

$$\begin{aligned} m_b(\boldsymbol{k},\boldsymbol{d},c,J,I;P_n)&= \sum _{i_j \ge k_j,j \in J^c; i_j \ge k_j+d_j+c, j \in I} \!\!\!\!\!\! n_b(\boldsymbol{ i}_{I^*}: \boldsymbol{d}_{I^{*c}}) \nonumber \\&= \sum _{\mathbf {e}\in \{0,1\}^{|I^{*c}|}} (-1)^{|\mathbf {e}|} m_b((\boldsymbol{k}_{J^c}:(\boldsymbol{k}+\boldsymbol{d}+c)_I:(\mathbf {d}+\mathbf {e})_{I^{*c}});P_n) \nonumber \\&= \sum _{j=0}^{|I^{*c}|} (-1)^j \left( {\begin{array}{c}|I^{*c}|\\ j\end{array}}\right) \max (b^{m-|\boldsymbol{k}|_{I^*}- |\boldsymbol{d}|_{J}-c|I|- j}-1,0) \end{aligned}$$
(44)

where \((\boldsymbol{ i}_I:\boldsymbol{d}_{I^c})\) represents the vector with jth component given by \(i_j\) if \(j \in I\) and by \(d_j\) if \(j \notin I\). If \(m-|\boldsymbol{k}|_{I^*}- |\boldsymbol{d}|_{J}-c|I|-|I^{*c}| \ge 0\) then the above sum is given by

$$\begin{aligned} m_b(\boldsymbol{k},\boldsymbol{d},c,J,I;P_n)&= b^{m- |\boldsymbol{k}|_{I^*} -|\boldsymbol{d}|_{J}-c|I|-|I^{*c}| }\sum _{j=0}^{|I^{*c}|} (-1)^j \left( {\begin{array}{c}|I^{*c}|\\ j\end{array}}\right) b^{|I^{*c}|-j}\\&= b^{m- |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_{J}-c|I|-|I^{*c}| } (b-1)^{|I^{*c}|}. \end{aligned}$$

If \(m-|\boldsymbol{k}|_{I^*} - |\boldsymbol{d}|_{J}-c|I| \le 0\) then the \(\max \) inside the sum (44) always yields 0. When \( 1< m-|\boldsymbol{k}|_{I^*} - |\boldsymbol{d}|_{J}-c|I| < |I^{*c}|\), then (44) is given by

$$\begin{aligned}&\sum _{j=0}^{m-|\boldsymbol{k}|_{I^*}- |\boldsymbol{d}|_{J}-c|I|} (-1)^j \left( {\begin{array}{c}|I^{*c}|\\ j\end{array}}\right) (b^{m-|\boldsymbol{k}|_{I^*}- |\boldsymbol{d}|_{J}-c|I|- j}-1) \\&\le \left( b^{m- |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_{J}}\left( \frac{b-1}{b}\right) ^{|I^{*c}|} + \left( {\begin{array}{c}|I^{*c}|-1\\ m- |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_{J}-c|I|\end{array}}\right) {} \mathbf{1}_{m- |\boldsymbol{k}|_{I^*}-|\boldsymbol{d}|_{J}-c|I|}\right) , \end{aligned}$$

where the last inequality is obtained by applying Lemma 11 with \(s=|I^{*c}|\) and \(k = m-|\boldsymbol{k}|_{I^*}- |\boldsymbol{d}|_{J}-c|I|\). Finally, when \(m=|\boldsymbol{k}|_{I^*}+|\boldsymbol{d}|_{J}+c|I|+1\), then (44) is given by

$$\begin{aligned}&\sum _{j=0}^{1} (-1)^j \left( {\begin{array}{c}|I^{*c}|\\ j\end{array}}\right) (b^{1- j}-1) = b-1. \end{aligned}$$

\(\square \)

Lemma 13

The function \(g_{j,i}(\ell )\) defined in (16) with \(0 \le i < j\) and \(j \ge 1\) satisfies the following properties.

1. For a given i, if \(\ell \ge 1\) is odd then \(g_{j,i}(\ell ) \ge g_{j,i}(\ell +1)\).

2. For a given i, if \(\ell > 2i\) is odd then \(g_{j,i}(\ell ) \ge g_{j,i}(\ell +r)\) for all \(r \ge 0\).

Proof

For (1): if \(\ell \ge j+i\) then \(g_{j,i}(\ell ) = g_{j,i}(\ell +1) = 1\); if \(2i<\ell <j+i\), then \(g_{j,i}(\ell ) >1\) while \(g_{j,i}(\ell +1)=1\); if \(\ell \le 2i\) then \(g_{j,i}(\ell )=0\) and since \(\ell \) is odd, it means \(\ell \le 2i-1\), and thus \(\ell +1 \le 2i\), implying that \(g_{j,i}(\ell +1)=0\). For (2): if \(\ell \ge j+i\) then \(g_{j,i}(\ell +r)=1\) for all \(r\ge 0\); if \(2i+1<\ell <j+i\), then the function \(g_{j,i}(\ell )\) is increasing as \(\ell \) decreases over odd values strictly between \(j+i\) and \(2i+1\); this is because when \(\ell \) decreases by 2, \(h_{j,i}(\ell )\) increases by a factor of at least \(2b^2/((j-i-1)(j-i-2))\), which is at least 1 since \(j \le s \le b\). Finally, we need to show that \(g_{j,i}(2i+1) = (b/(b-1))^{j-i-1} \ge g_{j,i}(2i+3)\), i.e., that

$$ \left( \frac{b}{b-1}\right) ^{j-i-1} \ge 1+ \frac{b^{j+i-(2i+3)}}{(b-1)^{j-i}} \left( {\begin{array}{c}j-i-1\\ 2i+3-2i\end{array}}\right) = 1 + \left( \frac{b}{b-1}\right) ^{j-i} \frac{1}{b^3} \left( {\begin{array}{c}j-i-1\\ 3\end{array}}\right) . $$

Using the bound \(((b-1)/b)^j \le (b-j-1)/(b-1)\) shown in the proof of Lemma 14, we have that the above holds if

$$\begin{aligned} \frac{b-1}{b}&\ge \frac{(j-i-1)(j-i-2)(j-i-3)}{6b^3}+ \frac{b-(j-i)-1}{b-1} \\ \Leftrightarrow \frac{(b-1)^2-b(b-j+i-1)}{b(b-1)}&\ge \frac{(j-i-1)(j-i-2)(j-i-3)}{6b^3} \\ \Leftrightarrow 6b^2(1+b(j-i-1))&\ge (b-1)(j-i-1)(j-i-2)(j-i-3), \end{aligned}$$

which is clearly true since \(j\le s \le b\) and \(i \ge 0\) and therefore \(6b^3(j-i-1) \ge (b-1)(j-i-1)(j-i-2)(j-i-3)\). \(\square \)
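Both parts of Lemma 13 can be tested numerically. Since the definition (16) of \(g_{j,i}\) is not restated in this appendix, the sketch below uses our reading of it, consistent with the case analysis above: \(g_{j,i}(\ell )=0\) for \(\ell \le 2i\), \((b/(b-1))^{j-i-1}\) at \(\ell =2i+1\), \(1+h_{j,i}(\ell )\) for odd \(\ell \) with \(2i+1<\ell <j+i\), and 1 otherwise.

```python
from fractions import Fraction
from math import comb

def h(j, i, ell, b):
    # h_{j,i}(ell) = C(j-i-1, ell-2i) * b^(j+i-ell) / (b-1)^(j-i)
    return comb(j - i - 1, ell - 2 * i) * Fraction(b**(j + i - ell), (b - 1)**(j - i))

def g(j, i, ell, b):
    # our reconstruction of (16); only an assumption, not quoted from the paper
    if ell <= 2 * i:
        return Fraction(0)
    if ell == 2 * i + 1:
        return Fraction(b, b - 1)**(j - i - 1)
    if ell % 2 == 1 and ell < j + i:
        return 1 + h(j, i, ell, b)
    return Fraction(1)

for b in range(3, 9):
    for j in range(1, b + 1):        # j plays the role of |J| <= s <= b
        for i in range(j):
            for ell in range(1, 2 * j, 2):          # Part 1: odd ell
                assert g(j, i, ell, b) >= g(j, i, ell + 1, b)
            for ell in range(2 * i + 1, 2 * j, 2):  # Part 2: odd ell > 2i
                for r in range(6):
                    assert g(j, i, ell, b) >= g(j, i, ell + r, b)
print("Lemma 13 monotonicity verified for b <= 8")
```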

Lemma 14

Let \(s \ge 3\) and \(b \ge s\). Let m be odd with \(1 \le m \le 2s-3 \), and consider the function

$$\begin{aligned} G(m,s)= & {} \sum _{j=0}^{0.5(m-3)} \left( {\begin{array}{c}s\\ j\end{array}}\right) \\&\qquad + \sum _{j=\max (0,m-s+1)}^{0.5(m-3)} \left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m) + \left( {\begin{array}{c}s\\ 0.5(m-1)\end{array}}\right) \left( \frac{b}{b-1}\right) ^{s-0.5(m-1)-1}, \end{aligned}$$

where \(h_{s,j}(m)\) is as defined in (15), i.e.,

$$ h_{s,j}(m) = \left( {\begin{array}{c}s-j-1\\ m-2j\end{array}}\right) \frac{b^{s+j-m}}{(b-1)^{s-j}}. $$

Then \(G(m,s) \ge G(m-2,s)\) for \(m \ge 3\) odd. That is, G(ms) is decreasing over the odd integers from \(2s-3\) down to 3.

Proof

First, we compute

$$\begin{aligned}&\left( \frac{b}{b-1}\right) ^{s-0.5(m-1)-1} - h_{s,0.5(m-1)}(m)\\&= \left( \frac{b}{b-1}\right) ^{s-0.5(m-1)-1} - \left( {\begin{array}{c}s-0.5(m-1)-1\\ 1\end{array}}\right) \frac{b^{0.5(m-1)+s-m}}{(b-1)^{s-0.5(m-1)}} \\&= \left( \frac{b}{b-1}\right) ^{s-0.5(m-1)} \frac{b-1-(s-0.5(m-1)-1)}{b}\\&=\left( \frac{b}{b-1}\right) ^{s-0.5(m-1)} \frac{b-(s-0.5(m-1))}{b}. \end{aligned}$$

Using this, we can write

$$\begin{aligned} G(m,s)&= \sum _{j=0}^{0.5(m-3)} \left( {\begin{array}{c}s\\ j\end{array}}\right) + \sum _{j=\max (0,m-s+1)}^{0.5(m-1)} \left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m) \nonumber \\&+ \left( {\begin{array}{c}s\\ 0.5(m-1)\end{array}}\right) \left( \frac{b}{b-1}\right) ^{s-0.5(m-1)} \frac{0.5(m-1)+b-s}{b}. \end{aligned}$$
(45)

Next, we show that for \(2 \le j \le 0.5(m-1)\):

$$\begin{aligned} \left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m) \ge \left( {\begin{array}{c}s\\ j-2\end{array}}\right) h_{s,j-2}(m-2). \end{aligned}$$
(46)
$$\begin{aligned}&\left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m) -\left( {\begin{array}{c}s\\ j-2\end{array}}\right) h_{s,j-2}(m-2)\\&= \left( {\begin{array}{c}s\\ j\end{array}}\right) \left( {\begin{array}{c}s-j-1\\ m-2j\end{array}}\right) \frac{b^{s+j-m}}{(b-1)^{s-j}} - \left( {\begin{array}{c}s\\ j-2\end{array}}\right) \left( {\begin{array}{c}s-(j-2)-1\\ m-2-2(j-2)\end{array}}\right) \frac{b^{s+j-2-(m-2)}}{(b-1)^{s-(j-2)}} \\&=\left( {\begin{array}{c}s\\ j\end{array}}\right) \left( {\begin{array}{c}s-j-1\\ m-2j\end{array}}\right) \frac{b^{s+j-m}}{(b-1)^{s-j}}-\left( {\begin{array}{c}s\\ j-2\end{array}}\right) \left( {\begin{array}{c}s-j+1\\ m-2j+2\end{array}}\right) \frac{b^{s+j-m}}{(b-1)^{s-j+2}} \\&=\left( {\begin{array}{c}s\\ j\end{array}}\right) \left( {\begin{array}{c}s-j-1\\ m-2j\end{array}}\right) \frac{b^{s+j-m}}{(b-1)^{s-j}} \\&\qquad \cdot \left( 1-\frac{j(j-1)}{(s-j+2)(s-j+1)} \frac{(s-j+1)(s-j)}{(m-2j+2)(m-2j+1)} \frac{1}{(b-1)^2}\right) . \end{aligned}$$

Hence to prove (46), we need to show that

$$ 1 \ge \frac{j(j-1)}{(s-j+2)(s-j+1)} \frac{(s-j+1)(s-j)}{(m-2j+2)(m-2j+1)} \frac{1}{(b-1)^2}, $$

which holds because

$$\begin{aligned}&\frac{j(j-1)}{(s-j+2)(s-j+1)} \frac{(s-j+1)(s-j)}{(m-2j+2)(m-2j+1)} \frac{1}{(b-1)^2} \le \frac{j(j-1)}{6} \frac{1}{(b-1)^2}\\&\le \frac{(s-2)(s-3)}{6(b-1)^2} \le \frac{1}{6} \le 1, \end{aligned}$$

since \(j \le 0.5(m-1) \le s-2\) and \(b \ge s\).

Using (45), \(G(m,s) \ge G(m-2,s)\) can be shown to hold if

$$\begin{aligned}&\sum _{j=0}^{0.5(m-3)} \left( {\begin{array}{c}s\\ j\end{array}}\right) + \sum _{j=\max (0,m-s+1)}^{0.5(m-1)} \left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m) \nonumber \\&+ \left( {\begin{array}{c}s\\ 0.5(m-1)\end{array}}\right) \left( \frac{b}{b-1}\right) ^{s-0.5(m-1)} \frac{0.5(m-1)+b-s}{b} \nonumber \\ \ge&\sum _{j=0}^{0.5(m-5)} \left( {\begin{array}{c}s\\ j\end{array}}\right) + \sum _{j=\max (0,m-2-s+1)}^{0.5(m-3)} \left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m-2)\nonumber \\&+ \left( {\begin{array}{c}s\\ 0.5(m-3)\end{array}}\right) \left( \frac{b}{b-1}\right) ^{s-0.5(m-3)} \frac{0.5(m-3)+b-s}{b}. \end{aligned}$$
(47)

In turn, using (46), we know that:

$$ \sum _{j=\max (0,m-s+1)}^{0.5(m-1)} \left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m) \ge \sum _{j=\max (0,m-2-s+1)}^{0.5(m-5)} \left( {\begin{array}{c}s\\ j\end{array}}\right) h_{s,j}(m-2) $$

and therefore to show (47) it is sufficient to show that

$$\begin{aligned}&\left( {\begin{array}{c}s\\ 0.5(m-3)\end{array}}\right) + \left( {\begin{array}{c}s\\ 0.5(m-1)\end{array}}\right) \left( \frac{b}{b-1}\right) ^{s-0.5(m-1)} \frac{0.5(m-1)+b-s}{b} \nonumber \\ \ge&\left( {\begin{array}{c}s\\ 0.5(m-3)\end{array}}\right) h_{s,0.5(m-3)}(m-2) \nonumber \\ +&\left( {\begin{array}{c}s\\ 0.5(m-3)\end{array}}\right) \left( \frac{b}{b-1}\right) ^{s-0.5(m-3)} \frac{0.5(m-3)+b-s}{b}, \end{aligned}$$
(48)

where

$$\begin{aligned} h_{s,0.5(m-3)}(m-2)&= \left( {\begin{array}{c}s-0.5(m-3)-1\\ m-2-(m-3)\end{array}}\right) \frac{b^{s+0.5(m-3)-(m-2)}}{(b-1)^{s-0.5(m-3)}}\\&=(s-0.5(m-1) )\left( \frac{b}{b-1} \right) ^{s-0.5(m-1)} \frac{1}{b-1}. \end{aligned}$$

The following inequality will be helpful in this proof:

Claim 1

For \(b \ge 2\) and \(1 \le j \le b-2\), we have that

$$\begin{aligned} \left( \frac{b}{b-1}\right) ^j \le \frac{b-1}{b-(j+1)}. \end{aligned}$$
(49)

Proof

The inequality is equivalent to having

$$ (b-1)^{j+1} \ge b^j (b-(j+1)). $$

Applying the mean value theorem to \(f(x) = x^{j+1}\) and noticing \(f'(x)\) is monotone increasing for \(x \ge 0\) , we get that \(f(b)-f(b-1) = f'(\xi ) \le f'(b)\) for some \(\xi \in (b-1,b)\) and thus \((b-1)^{j+1} \ge b^{j+1} - (j+1)b^j\). \(\square \)

Going back to our goal of proving (48), it is sufficient to show that

$$\begin{aligned}&1+\left( \frac{b}{b-1}\right) ^{s-0.5(m-1)} \frac{s-0.5(m-3)}{0.5(m-1)} \frac{0.5(m-1)+b-s}{b}\nonumber \\&\ge \left( \frac{b}{b-1}\right) ^{s-0.5(m-1)} \left( \frac{s-0.5(m-1)}{b-1} + \frac{b}{b-1}\frac{0.5(m-3)+b-s}{b} \right) \nonumber \\ \Leftrightarrow&\left( \frac{b-1}{b}\right) ^{s-0.5(m-1)} +\frac{s-0.5(m-3)}{0.5(m-1)} \frac{0.5(m-1)+b-s}{b}\nonumber \\&\ge \left( \frac{s-0.5(m-1)}{b-1} + \frac{0.5(m-3)+b-s}{b-1} \right) . \end{aligned}$$
(50)

Using (49) to simplify the LHS of (50), we see that (50) holds if

$$\begin{aligned}&\frac{b-(s-0.5(m-1))-1}{b-1}+\frac{s-0.5(m-3)}{0.5(m-1)} \frac{0.5(m-1)+b-s}{b} \\&\ge \left( \frac{s-0.5(m-1)}{b-1} + \frac{0.5(m-3)+b-s}{b-1} \right) \\ \Leftrightarrow&\frac{s-0.5(m-3)}{0.5(m-1)} \frac{0.5(m-1)+b-s}{b} \ge \frac{s-0.5(m-1)}{b-1}\\ \Leftrightarrow&\frac{b-1}{b} \ge \frac{s-0.5(m-1)}{s-0.5(m-3)} \frac{0.5(m-1)}{0.5(m-1)+b-s}, \end{aligned}$$

which holds because \(s-0.5(m-3) \le s \le b\) and \(\frac{b-1}{b} \ge \frac{b-1-x}{b-x}\) if \(x \ge 0\). \(\square \)
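Lemma 14's monotonicity claim is also easy to confirm numerically from the explicit formula for G(m,s), with \(h_{s,j}(m) = \left( {\begin{array}{c}s-j-1\\ m-2j\end{array}}\right) b^{s+j-m}/(b-1)^{s-j}\) as used in the proof:

```python
from fractions import Fraction
from math import comb

def h(s, j, m, b):
    # h_{s,j}(m) = C(s-j-1, m-2j) * b^(s+j-m) / (b-1)^(s-j)
    return comb(s - j - 1, m - 2 * j) * Fraction(b) ** (s + j - m) / Fraction(b - 1) ** (s - j)

def G(m, s, b):
    # explicit formula for G(m,s) from the statement of Lemma 14 (m odd)
    first = sum(comb(s, j) for j in range((m - 3) // 2 + 1))
    mid = sum(comb(s, j) * h(s, j, m, b)
              for j in range(max(0, m - s + 1), (m - 3) // 2 + 1))
    last = comb(s, (m - 1) // 2) * Fraction(b, b - 1) ** (s - (m - 1) // 2 - 1)
    return first + mid + last

# check G(m,s) >= G(m-2,s) for odd m with 3 <= m <= 2s-3 and b >= s
for s in range(3, 10):
    for b in range(s, 14):
        for m in range(3, 2 * s - 2, 2):
            assert G(m, s, b) >= G(m - 2, s, b), (m, s, b)
print("Lemma 14 monotonicity verified for 3 <= s <= 9")
```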

Definition 3

We denote by \({\mathcal A}_w\) the set of \(w \times \ell \) weight matrices A with entries \(\alpha _{i,j} \ge 0\) that satisfy the following two conditions: 1) \(\sum _{i=1}^w \alpha _{i,j} = 1\) for all \(j=1,\ldots ,\ell \); 2) the weights \(\alpha _{i,j}\) obey a decreasing-cumulative-sums condition as follows: for \(1 \le i\le w,1 \le j \le \ell \), let

$$\begin{aligned} A_{i,j} = \sum _{k=1}^i \alpha _{k,j}. \end{aligned}$$
(51)

Then \(A_{i,j} \ge A_{i,j+1}\) for each \(i=1,\ldots ,w\) and \(j=1,\ldots ,\ell -1\) (when \(i=w\) we have \(A_{w,j}=1\) for all j). This means the weights on the first row are decreasing from left to right; the partial sums of the two first rows are decreasing from left to right, etc.

Lemma 15

Let X be a \(w \times \ell \) matrix with \(\ell \ge w\) and entries \(x_{i,j} \ge 0\) arranged in a staircase pattern, that is, \(x_{i,j}>0\) if and only if \(i +j \le \ell +1\), for \(1 \le i \le w, 1 \le j \le \ell \). We assume X satisfies the following two conditions: first,

$$ \sum _{j=1}^{\ell } x_{1,j} \ge \sum _{j=1}^{\ell -1} x_{2,j} \ge \cdots \ge \sum _{j=1}^{\ell -w+1} x_{w,j} $$

(we refer to this as the decreasing-row-sums condition) and second,

$$\begin{aligned} x_{1,j} \le x_{2,j} \le \cdots \le x_{\min (w,\ell -j+1),j}, \qquad j=1,\ldots ,\ell \end{aligned}$$
(52)

(we refer to this as the increasing-within-column condition).

Let A be a weight matrix in \({\mathcal A}_w\) and let

$$ \Vert A \circ X \Vert _1 = \sum _{j=1}^{\ell } \alpha _{1,j} x_{1,j} + \sum _{j=1}^{\ell -1} \alpha _{2,j} x_{2,j} + \cdots + \sum _{j=1}^{\ell -w+1} \alpha _{w,j} x_{w,j}. $$

Then for any \(A \in {\mathcal A}_w\)

$$\begin{aligned} \Vert A \circ X \Vert _1 \le \sum _{j=1}^{\ell } x_{1,j}. \end{aligned}$$
(53)

That is, the weight matrix \(A \in {\mathcal A}_w\) that maximizes the LHS of (53) is the one with 1's on the first row and 0's elsewhere.
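Before turning to the proof, the inequality (53) can be stress-tested on randomly generated admissible pairs (X, A). The generators below are our own constructions of matrices satisfying the staircase, decreasing-row-sums, increasing-within-column, and decreasing-cumulative-sums conditions; this is a sketch, not part of the paper.

```python
import random

def make_X(w, ell):
    # staircase matrix: x[i][j] > 0 iff (i+1)+(j+1) <= ell+1 in 1-based terms,
    # columns non-decreasing downward, truncated row sums decreasing downward
    X = [[0.0] * ell for _ in range(w)]
    for j in range(ell - w + 1):
        X[w - 1][j] = random.uniform(0.1, 1.0)
    for i in range(w - 2, -1, -1):
        # copy row i+1 and append a fresh positive entry: this preserves the
        # column condition and strictly increases the truncated row sum
        for j in range(ell - i - 1):
            X[i][j] = X[i + 1][j]
        X[i][ell - i - 1] = random.uniform(0.1, 1.0)
    return X

def make_A(w, ell):
    # weight matrix in A_w: each column sums to 1, cumulative column sums
    # decrease left to right; built right-to-left by moving mass upward
    col = [random.random() + 0.01 for _ in range(w)]
    t = sum(col)
    cols = [[c / t for c in col]]
    for _ in range(ell - 1):
        col = cols[0][:]
        i2 = random.randrange(1, w)       # take mass from row i2 ...
        i1 = random.randrange(0, i2)      # ... and move it up to row i1
        d = random.uniform(0.0, col[i2])
        col[i2] -= d
        col[i1] += d
        cols.insert(0, col)
    return [[cols[j][i] for j in range(ell)] for i in range(w)]

def weighted(A, X, w, ell):
    # ||A o X||_1, summing over the staircase support only
    return sum(A[i][j] * X[i][j] for i in range(w) for j in range(ell - i))

random.seed(1)
for _ in range(200):
    w = random.randint(2, 5)
    ell = random.randint(w, w + 4)
    X, A = make_X(w, ell), make_A(w, ell)
    assert weighted(A, X, w, ell) <= sum(X[0]) + 1e-9
print("Lemma 15 inequality held in 200 random trials")
```

The mass-moving step in `make_A` mirrors the structure exploited in the proof: shifting weight toward lower-indexed rows can only increase the cumulative column sums, which is exactly condition (51).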

Proof

First, note that \(A \in {\mathcal A}_w\) implies that cumulative sums from the last row up are increasing, i.e., for \(R_{i,j} = \sum _{k=i}^w \alpha _{k,j}\), we have \(R_{i,j} \le R_{i,j+1}\) for \(j=1,\ldots ,\ell -1\).

We proceed by induction on \(w \ge 2\).

If \(w=2\), then it suffices to show that for \(A \in {\mathcal A}_2\), we have that

$$ \sum _{j=1}^{\ell }\alpha _{1,j} x_{1,j} + \sum _{j=1}^{\ell -1} (1-\alpha _{1,j})x_{2,j} \le \sum _{j=1}^{\ell } x_{1,j} \Leftrightarrow \sum _{j=1}^{\ell -1} (1-\alpha _{1,j})x_{2,j} \le \sum _{j=1}^{\ell } (1-\alpha _{1,j}) x_{1,j}, $$

or, equivalently, that

$$ \sum _{j=1}^{\ell -1} (1-\alpha _{1,j}) (x_{2,j}-x_{1,j}) \le (1-\alpha _{1,\ell }) x_{1,\ell }. $$

Now, we know that

$$ \sum _{j=1}^{\ell } x_{1,j} \ge \sum _{j=1}^{\ell -1} x_{2,j} \Leftrightarrow \sum _{j=1}^{\ell -1} (x_{2,j}-x_{1,j}) \le x_{1,\ell } $$

with \(x_{2,j} - x_{1,j} \ge 0\). Therefore

$$ \sum _{j=1}^{\ell -1} (1-\alpha _{1,j})(x_{2,j}-x_{1,j}) \le (1-\alpha _{1,\ell }) \sum _{j=1}^{\ell -1} (x_{2,j}-x_{1,j}) \le (1-\alpha _{1,\ell }) x_{1,\ell }, $$

where the first inequality holds because the \(\alpha _{1,j}\)’s are decreasing.

Now assume the statement holds for \(w-1\ge 2\). First we create a new weight matrix \(\tilde{A}\) by merging the last two rows into the second-to-last row and setting the last row to zero, i.e., we define \(\tilde{\alpha }_{w-1,j}\) as

$$\begin{aligned} \tilde{\alpha }_{w-1,j}&= \alpha _{w-1,j}+\alpha _{w,j} \qquad j=1,\ldots ,\ell \\ \tilde{\alpha }_{w,j}&=0 \qquad j=1,\ldots ,\ell \\ \tilde{\alpha }_{i,j}&= \alpha _{i,j}, i=1,\ldots ,w-2,j=1,\ldots ,\ell . \end{aligned}$$

With this change, we claim that \(\tilde{A} \in {\mathcal A}_w\). Indeed:

1. \(\tilde{\alpha }_{i,j} \ge 0\);

2. \(\sum _{i=1}^w \tilde{\alpha }_{i,j} = \sum _{i=1}^{w-2} \alpha _{i,j} + (\alpha _{w-1,j}+\alpha _{w,j}) + 0 = 1\);

3. \(\tilde{A}_{i,j} = A_{i,j}\) for \(i=1,\ldots ,w-2\) and \(\tilde{A}_{w-1,j} = A_{w,j} = 1\) for \(j=1,\ldots ,\ell \).

Next, we show that

$$\begin{aligned} \Vert \tilde{A} \circ X \Vert _1 \ge \Vert A \circ X\Vert _1. \end{aligned}$$
(54)

Since \(\alpha _{i,j} = \tilde{\alpha }_{i,j}\) for \(i< w-1\), then (54) holds if and only if

$$\begin{aligned}&\sum _{j=1}^{\ell -w+2} (\alpha _{w-1,j}+\alpha _{w,j}) x_{w-1,j} \ge \sum _{j=1}^{\ell -w+2} \alpha _{w-1,j} x_{w-1,j} + \sum _{j=1}^{\ell -w+1} \alpha _{w,j} x_{w,j} \\ \Leftrightarrow&\sum _{j=1}^{\ell -w+1} \alpha _{w,j} x_{w-1,j} + \alpha _{w,\ell -w+2} x_{w-1,\ell -w+2} \ge \sum _{j=1}^{\ell -w+1} \alpha _{w,j} x_{w,j} \\ \Leftrightarrow&\sum _{j=1}^{\ell -w+1} \alpha _{w,j} (x_{w,j}-x_{w-1,j}) \le \alpha _{w,\ell -w+2} x_{w-1,\ell -w+2} . \end{aligned}$$

By the decreasing-row-sum assumption on the \(x_{i,j}\)’s we know that

$$ 0 \le \sum _{j=1}^{\ell -w+1} (x_{w,j}-x_{w-1,j}) \le x_{w-1,\ell -w+2} $$

and by assumption that \(A\in {\mathcal A}_w\) we have that \(\alpha _{w,1} \le \alpha _{w,2}\le \cdots \le \alpha _{w,\ell }\). Therefore

$$\begin{aligned}&\sum _{j=1}^{\ell -w+1} \alpha _{w,j} (x_{w,j}-x_{w-1,j}) \le \alpha _{w,\ell -w+1} \sum _{j=1}^{\ell -w+1} (x_{w,j}-x_{w-1,j}) \\&\le \alpha _{w,\ell -w+1} x_{w-1,\ell -w+2} \le \alpha _{w,\ell -w+2} x_{w-1,\ell -w+2}, \end{aligned}$$

as required to show that (54) holds.
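Inequality (54) can likewise be spot-checked numerically. The sketch below is an illustration under my reading of the displays above (not the authors' code): \(\Vert A \circ X \Vert _1\) is taken to sum row \(i\) over \(j=1,\ldots ,\ell -i+1\), as the index ranges above indicate, and the hypothetical generators `random_A` and `random_X` enforce only the properties this step uses — nonnegative \(A\) with unit column sums and an increasing last row, and a staircase \(X\) with entries nondecreasing down each column and decreasing row sums.

```python
import random

def staircase_norm(A, X, ell):
    """||A o X||_1 with the staircase convention: row i (0-based) summed over j < ell - i."""
    return sum(A[i][j] * X[i][j] for i in range(len(A)) for j in range(ell - i))

def merge_last_rows(A):
    """The matrix \tilde A: fold the last row into the second-to-last, then zero it."""
    At = [row[:] for row in A]
    At[-2] = [a + b for a, b in zip(A[-2], A[-1])]
    At[-1] = [0.0] * len(A[-1])
    return At

def random_A(w, ell, rng):
    """Nonnegative matrix, columns summing to 1, last row increasing in j."""
    last = sorted(rng.uniform(0.0, 1.0 / w) for _ in range(ell))
    A = [[rng.random() for _ in range(ell)] for _ in range(w - 1)]
    for j in range(ell):
        s = sum(A[i][j] for i in range(w - 1))
        for i in range(w - 1):
            A[i][j] *= (1.0 - last[j]) / s
    A.append(last)
    return A

def random_X(w, ell, rng):
    """Staircase array: x_{i+1,j} >= x_{i,j} where defined, decreasing row sums."""
    X = [[rng.uniform(0.1, 1.0) for _ in range(ell)]]
    for i in range(1, w):
        prev, m = X[-1], ell - i  # m = length of the new row
        d = [rng.random() for _ in range(m)]
        scale = prev[m] / sum(d) * rng.random()  # increments sum to <= dropped entry
        X.append([prev[j] + d[j] * scale for j in range(m)] + [0.0] * i)
    return X

rng = random.Random(0)
for _ in range(1000):
    A, X = random_A(3, 6, rng), random_X(3, 6, rng)
    assert staircase_norm(merge_last_rows(A), X, 6) >= staircase_norm(A, X, 6) - 1e-12
```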

Next, to use the induction hypothesis, we observe that \(\tilde{\alpha }_{w,j} = 0\) implies we can essentially ignore the \(x_{w,j}\)’s. More formally, let \(\tilde{A}_{w-1}\) be the matrix formed by the first \(w-1\) rows of \(\tilde{A}\) and similarly for \(X_{w-1}\). Then \(\tilde{A}_{w-1} \in {\mathcal A}_{w-1}\), since

  1. \(\tilde{\alpha }_{i,j} \ge 0\) for \(i=1,\ldots ,w-1\), \(j=1,\ldots ,\ell \);
  2. \(\sum _{i=1}^{w-1} \tilde{\alpha }_{i,j} = \sum _{i=1}^w \alpha _{i,j} = 1\) for \(j=1,\ldots , \ell \);
  3. \(\tilde{A}_{i,j} \ge \tilde{A}_{i,j+1}\) as verified earlier (and note that \(\tilde{\alpha }_{w-1,1} \le \cdots \le \tilde{\alpha }_{w-1,\ell }\) by the assumption that \(A \in {\mathcal A}_w\) and since \(\tilde{\alpha }_{w-1,j} = R_{w-1,j}\)).

By applying the induction hypothesis, we obtain

$$ \Vert \tilde{A}_{w-1} \circ X_{w-1} \Vert _1 \le \sum _{j=1}^{\ell } x_{1,j} $$

and since \( \Vert A \circ X \Vert _1 \le \Vert \tilde{A} \circ X \Vert _1 = \Vert \tilde{A}_{w-1} \circ X_{w-1} \Vert _1, \) this proves the result. \(\square \)
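To close the loop, the overall bound \(\Vert A \circ X \Vert _1 \le \sum _{j=1}^{\ell } x_{1,j}\) can be checked on a small hand-built instance. The numbers below are illustrative choices of mine (\(w=3\), \(\ell =4\)) satisfying the hypotheses as stated in the proof — columns of \(A\) summing to 1 with row 1 decreasing and row 3 increasing, and a staircase \(X\) with entries nondecreasing down each column and decreasing row sums — with \(\Vert A \circ X \Vert _1\) summing row \(i\) over \(j=1,\ldots ,\ell -i+1\), as in the displays above.

```python
def staircase_norm(A, X, ell):
    """||A o X||_1 with the staircase convention: row i (0-based) summed over j < ell - i."""
    return sum(A[i][j] * X[i][j] for i in range(len(A)) for j in range(ell - i))

ell = 4
# alpha: columns sum to 1, row 1 decreasing, row 3 increasing
A = [[0.6, 0.5, 0.4, 0.3],
     [0.3, 0.3, 0.3, 0.3],
     [0.1, 0.2, 0.3, 0.4]]
# x: staircase (zeros pad the short rows), entries nondecreasing down each
# column, row sums 2.8 >= 2.7 >= 2.4 decreasing
X = [[1.0, 0.8, 0.6, 0.4],
     [1.1, 0.9, 0.7, 0.0],
     [1.3, 1.1, 0.0, 0.0]]

lhs = staircase_norm(A, X, ell)  # ~2.52
rhs = sum(X[0])                  # ~2.8
assert lhs <= rhs
```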


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this paper

Lemieux, C., Wiart, J. (2022). On the Distribution of Scrambled \((0,m,s)-\)Nets Over Unanchored Boxes. In: Keller, A. (eds) Monte Carlo and Quasi-Monte Carlo Methods. MCQMC 2020. Springer Proceedings in Mathematics & Statistics, vol 387. Springer, Cham. https://doi.org/10.1007/978-3-030-98319-2_5
