1 Correction to: Ann Inst Stat Math https://doi.org/10.1007/s10463-018-0674-9

There is a gap at the end of the proof of Theorem 1, since the application of the conditional McDiarmid inequality there yields

$$\begin{aligned} J_n - {\mathbf E}\{J_n|X_1, \dots , X_n\} \rightarrow 0 \quad a.s., \end{aligned}$$

where \(J_n=\int \left| \sum _{i=1}^n W_{n,i}(x) \cdot (Y_i-m(X_i)) \right| \mu ({\hbox {d}}x)\), and not yet the assertion

$$\begin{aligned} J_n \rightarrow 0 \quad a.s. \end{aligned}$$

in the last step of the proof of Theorem 1.

This gap can be filled by adding into assumption (A3) the second condition

$$\begin{aligned} \sum _{i=1}^n \int |W_{n,i}(x)|^2 \mu ({\hbox {d}}x) \rightarrow 0 \quad a.s. \end{aligned}$$
(29)

Using this condition together with \(|Y|\le L\) a.s., it is easy to see that one has

$$\begin{aligned} {\mathbf E}\{J_n|X_1,\dots ,X_n\} \rightarrow 0 \quad a.s., \end{aligned}$$

which is still needed to obtain the assertion; details are given in Sect. 2 below.

In order to verify (29) in the applications of Theorem 1, one notices for kernel estimation in the context of Lemma 6 that, up to a constant factor, the left-hand side of (29) is majorized by

$$\begin{aligned} \int \frac{1}{1+\sum _{i=1}^n I_{S_{r_1}} \left( \frac{x-X_i}{h_n} \right) } \mu ({\hbox {d}}x), \end{aligned}$$

which can be treated similarly to the verification of (A4) in Lemma 6. The verification of (29) for partitioning estimation in the context of Lemma 9 is analogous.

2 Details

Last part of the proof of Theorem 1. It remains to show

$$\begin{aligned} J_n \cdot I_{B_n} \rightarrow 0 \quad a.s. \end{aligned}$$

Application of the conditional McDiarmid inequality as in the proof of Theorem 1 yields

$$\begin{aligned} J_n \cdot I_{B_n} - {\mathbf E}\{ J_n \cdot I_{B_n} | X_1, \dots , X_n \} \rightarrow 0 \quad a.s. \end{aligned}$$

Hence, since \(0 \le J_n \cdot I_{B_n} \le J_n\), it suffices to show

$$\begin{aligned} {\mathbf E}\{ J_n | X_1, \dots , X_n \} \rightarrow 0 \quad a.s. \end{aligned}$$
(30)

By Jensen's inequality, the independence of the data, and \(|Y| \le L\) a.s., we get

$$\begin{aligned}&\left( {\mathbf E}\{ J_n | X_1, \dots , X_n \} \right) ^2\\&\quad \le {\mathbf E}\{ J_n^2 | X_1, \dots , X_n \}\\&\quad \le {\mathbf E}\left\{ \int \left| \sum _{i=1}^n W_{n,i}(x) \cdot (Y_i-m(X_i)) \right| ^2 \mu ({\hbox {d}}x) \bigg | X_1, \dots , X_n \right\} \\&\quad = {\mathbf E}\left\{ \left| \sum _{i=1}^n W_{n,i}(X) \cdot (Y_i-m(X_i)) \right| ^2 \bigg | X_1, \dots , X_n \right\} \\&\quad = {\mathbf E}\left\{ {\mathbf E}\left\{ \left| \sum _{i=1}^n W_{n,i}(X) \cdot (Y_i-m(X_i)) \right| ^2 \bigg | X, X_1, \dots , X_n \right\} \bigg | X_1, \dots , X_n \right\} \\&\quad = {\mathbf E}\left\{ \sum _{i=1}^n W_{n,i}(X)^2 \cdot {\mathbf E}\left\{ (Y_i-m(X_i))^2 \bigg | X, X_1, \dots , X_n \right\} \bigg | X_1, \dots , X_n \right\} \\&\quad \le 4 L^2 \cdot {\mathbf E}\left\{ \sum _{i=1}^n W_{n,i}(X)^2 \bigg | X_1, \dots , X_n \right\} \\&\quad = 4 L^2 \cdot \sum _{i=1}^n \int |W_{n,i}(x)|^2 \mu ({\hbox {d}}x). \end{aligned}$$
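For completeness, two steps implicit in this chain are the following: the second inequality uses Jensen's inequality with respect to the probability measure \(\mu \), and the reduction to the sum of squared weights uses that the cross terms vanish, since for \(i \ne j\), by the independence of the data,

$$\begin{aligned} {\mathbf E}\left\{ (Y_i-m(X_i)) \cdot (Y_j-m(X_j)) \,\big |\, X, X_1, \dots , X_n \right\} = {\mathbf E}\{ Y_i-m(X_i) \,|\, X_i\} \cdot {\mathbf E}\{ Y_j-m(X_j) \,|\, X_j\} = 0. \end{aligned}$$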

Thus, (30) follows from (29).

Proof of (29) in the context of Lemma 6. On the one hand, we have

$$\begin{aligned} \sum _{i=1}^n W_{n,i}(x)^2 = \frac{\sum _{i=1}^n K \left( \frac{x-X_i}{h_n} \right) ^2 }{ \left( \sum _{j=1}^n K \left( \frac{x-X_j}{h_n} \right) \right) ^2 } \le 1. \end{aligned}$$
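Here the bound by 1 is the elementary fact that, for nonnegative numbers, the sum of the squares is at most the square of the sum (with the usual convention that \(\sum _{i=1}^n W_{n,i}(x)^2=0\) if the denominator vanishes), i.e.,

$$\begin{aligned} \sum _{i=1}^n K \left( \frac{x-X_i}{h_n} \right) ^2 \le \left( \sum _{j=1}^n K \left( \frac{x-X_j}{h_n} \right) \right) ^2 . \end{aligned}$$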

On the other hand, one has

$$\begin{aligned} \sum _{i=1}^n W_{n,i}(x)^2\le & {} c_2 \cdot \frac{\sum _{i=1}^n K \left( \frac{x-X_i}{h_n} \right) }{ \left( \sum _{j=1}^n K \left( \frac{x-X_j}{h_n} \right) \right) ^2 } \cdot I_{\left\{ \sum _{j=1}^n K \left( \frac{x-X_j}{h_n} \right) >0\right\} }\\\le & {} c_2 \cdot \frac{1}{ \sum _{j=1}^n K \left( \frac{x-X_j}{h_n} \right) }. \end{aligned}$$
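Both estimates use the two-sided kernel bounds from the original paper, here restated under the assumption that they take the usual boxed form

$$\begin{aligned} c_1 \cdot I_{S_{r_1}}(x) \le K(x) \le c_2 \quad (x \in \mathbb {R}^d): \end{aligned}$$

the upper bound gives \(K(u)^2 \le c_2 \cdot K(u)\) in the numerator of the preceding display, and the lower bound is used in the subsequent chain of inequalities to replace \(\sum _{j=1}^n K((x-X_j)/h_n)\) by \(c_1 \cdot \sum _{j=1}^n I_{S_{r_1}}((x-X_j)/h_n)\).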

Consequently,

$$\begin{aligned} \sum _{i=1}^n W_{n,i}(x)^2\le & {} \min \left\{ 1, c_2 \cdot \frac{1}{ \sum _{j=1}^n K \left( \frac{x-X_j}{h_n} \right) } \right\} \\\le & {} \min \left\{ 1, \frac{c_2}{c_1} \cdot \frac{1}{ \sum _{j=1}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \right\} \\\le & {} \max \left\{ 1, \frac{c_2}{c_1} \right\} \cdot \min \left\{ 1, \frac{1}{ \sum _{j=1}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \right\} \\\le & {} \max \left\{ 1, \frac{c_2}{c_1} \right\} \cdot \frac{2}{ 1+\sum _{j=1}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) }. \end{aligned}$$
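The last inequality is the elementary bound (with the convention \(1/0:=\infty \)) that for all integers \(k \ge 0\)

$$\begin{aligned} \min \left\{ 1, \frac{1}{k} \right\} \le \frac{2}{1+k}, \end{aligned}$$

which one verifies by distinguishing the cases \(k=0\) and \(k \ge 1\), where in the latter case \(1/k \le 2/(1+k)\) is equivalent to \(1+k \le 2k\).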

Hence, it suffices to show

$$\begin{aligned} W_n:= \int \frac{1}{ 1+\sum _{j=1}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \mu ({\hbox {d}}x) \rightarrow 0 \quad a.s. \end{aligned}$$
(31)

For any bounded sphere S around 0, by Lemma 2a and by assumption (9), we get

$$\begin{aligned}&{\mathbf E}\left\{ \int _S \frac{1}{ 1+\sum _{j=1}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \mu ({\hbox {d}}x) \right\} \\&\quad = \int _S {\mathbf E}\left\{ \frac{1}{ 1+\sum _{j=1}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \right\} \mu ({\hbox {d}}x)\\&\quad \le \int _S \frac{1}{ n \cdot \mu (x+h_n \cdot S_{r_1}) } \mu ({\hbox {d}}x)\\&\quad \le \frac{const}{n \cdot h_n^d} \rightarrow 0 \quad (n \rightarrow \infty ), \end{aligned}$$
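The first inequality follows since, for fixed x, the sum \(\sum _{j=1}^n I_{S_{r_1}}((x-X_j)/h_n)\) is binomially distributed with parameters n and \(\mu (x+h_n \cdot S_{r_1})\); assuming that Lemma 2a is the standard bound \({\mathbf E}\{1/(1+B(n,p))\} \le 1/((n+1)p)\) for a binomial random variable \(B(n,p)\), this gives

$$\begin{aligned} {\mathbf E}\left\{ \frac{1}{ 1+\sum _{j=1}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \right\} \le \frac{1}{(n+1) \cdot \mu (x+h_n \cdot S_{r_1})} \le \frac{1}{n \cdot \mu (x+h_n \cdot S_{r_1})}. \end{aligned}$$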

where the last inequality holds because of equation (5.1) in Györfi et al. (2002).

Since the integrand in (31) is bounded by 1, the part of \(W_n\) coming from the complement of S is at most \(\mu (S^c)\), which can be made arbitrarily small by choice of S, so that \({\mathbf E}\{W_n\} \rightarrow 0\). Thus, it suffices to show

$$\begin{aligned} W_n - {\mathbf E}\{W_n\} \rightarrow 0 \quad a.s. \end{aligned}$$
(32)

Analogously to the proof of (A4), with \(X_1^\prime \), \(X_1\), ..., \(X_n\) independent and identically distributed and

$$\begin{aligned} W_n^\prime := \int \frac{1}{ 1+ I_{S_{r_1}} \left( \frac{x-X_1^\prime }{h_n} \right) + \sum _{j=2}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \mu ({\hbox {d}}x), \end{aligned}$$

by Lemma 4.2 in Kohler et al. (2003), one has

$$\begin{aligned} {\mathbf E}\{|W_n-{\mathbf E}\{W_n\}|^4\} \le c_{11} \cdot n^2 \cdot {\mathbf E}\{ (W_n-W_n^\prime )^4\} \quad (n \in \mathbb {N}). \end{aligned}$$

Furthermore, by the second part of Lemma 5 one gets

$$\begin{aligned}&{\mathbf E}\{|W_n-W_n^\prime |^4\}\\&\quad \le 16 \cdot {\mathbf E}\left\{ \left( \int \frac{ I_{S_{r_1}} \left( \frac{x-X_1}{h_n} \right) }{ \left( 1 + \sum _{j=2}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) \right) ^2 } \mu ({\hbox {d}}x) \right) ^4 \right\} \\&\quad \le 16 \cdot {\mathbf E}\left\{ \left( \int \frac{ I_{S_{r_1}} \left( \frac{x-X_1}{h_n} \right) }{ 1 + \sum _{j=2}^n I_{S_{r_1}} \left( \frac{x-X_j}{h_n} \right) } \mu ({\hbox {d}}x) \right) ^4 \right\} \\&\quad \le \frac{const}{n^4}. \end{aligned}$$
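One way to make the first inequality (and the factor 16) explicit: writing \(s(x):=\sum _{j=2}^n I_{S_{r_1}}((x-X_j)/h_n)\), one has

$$\begin{aligned} |W_n-W_n^\prime | \le \int \frac{ I_{S_{r_1}} \left( \frac{x-X_1}{h_n} \right) + I_{S_{r_1}} \left( \frac{x-X_1^\prime }{h_n} \right) }{ \left( 1 + s(x) \right) ^2 } \mu ({\hbox {d}}x), \end{aligned}$$

so that \((a+b)^4 \le 8\,(a^4+b^4)\) together with the identical distribution of \(X_1\) and \(X_1^\prime \) yields the factor 16.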

From these relations, one obtains (32) by the Borel–Cantelli lemma and the Markov inequality.
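Indeed, combining the last two bounds gives \({\mathbf E}\{|W_n-{\mathbf E}\{W_n\}|^4\} \le const/n^2\), so that for every \(\varepsilon >0\), by the Markov inequality,

$$\begin{aligned} \sum _{n=1}^\infty {\mathbf P}\left\{ |W_n-{\mathbf E}\{W_n\}| > \varepsilon \right\} \le \sum _{n=1}^\infty \frac{{\mathbf E}\{|W_n-{\mathbf E}\{W_n\}|^4\}}{\varepsilon ^4} \le \sum _{n=1}^\infty \frac{const}{\varepsilon ^4 \cdot n^2} < \infty , \end{aligned}$$

and the Borel–Cantelli lemma yields (32).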

Proof of (29) in the context of Lemma 9. Analogously to the above, it suffices to show

$$\begin{aligned} V_n := \int \frac{1}{ 1+\sum _{j=1}^n I_{A_{\mathcal{P}_n}(x)} \left( X_j \right) } \mu ({\hbox {d}}x) \rightarrow 0 \quad a.s. \end{aligned}$$

For any bounded sphere S around zero, by assumption (12) we get

$$\begin{aligned} \int _S \frac{1}{n \cdot \mu (A_{\mathcal{P}_n}(x))} \mu ({\hbox {d}}x) \rightarrow 0 \quad (n \rightarrow \infty ), \end{aligned}$$

from which, by Lemma 2a, we can conclude analogously to the above that

$$\begin{aligned} {\mathbf E}V_n \rightarrow 0 \quad (n \rightarrow \infty ). \end{aligned}$$
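Analogously to the kernel case, for fixed x the sum \(\sum _{j=1}^n I_{A_{\mathcal{P}_n}(x)}(X_j)\) is binomially distributed with parameters n and \(\mu (A_{\mathcal{P}_n}(x))\) (assuming, as for the usual partitioning estimate, that the partition \(\mathcal{P}_n\) is nonrandom), so that by Lemma 2a

$$\begin{aligned} {\mathbf E}\left\{ \int _S \frac{1}{ 1+\sum _{j=1}^n I_{A_{\mathcal{P}_n}(x)} \left( X_j \right) } \mu ({\hbox {d}}x) \right\} \le \int _S \frac{1}{n \cdot \mu (A_{\mathcal{P}_n}(x))} \mu ({\hbox {d}}x) \rightarrow 0 \quad (n \rightarrow \infty ), \end{aligned}$$

while the contribution of the complement of S to \(V_n\) is again at most \(\mu (S^c)\).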

Hence, it suffices to show

$$\begin{aligned} V_n - {\mathbf E}\{V_n\} \rightarrow 0 \quad a.s., \end{aligned}$$

which follows, analogously to the above, from the second part of Lemma 7.