Multivariate Density Estimation Using a Multivariate Weighted Log-Normal Kernel

Abstract

This paper proposes a multivariate asymmetric kernel density estimator based on a multivariate weighted log-normal (LN) kernel for non-negative multivariate data. Asymptotic properties of the multivariate weighted LN kernel density estimator are studied, and simulation studies are conducted in the bivariate setting.


References

  1. Bouezmarni, T. and Rombouts, J. V. K. (2010). Nonparametric density estimation for multivariate bounded data. Journal of Statistical Planning and Inference 140, 139–152.

  2. Chen, S. X. (1999). Beta kernel estimators for density functions. Computational Statistics and Data Analysis 31, 131–145.

  3. Chen, S. X. (2000). Probability density function estimation using gamma kernels. Annals of the Institute of Statistical Mathematics 52, 471–480.

  4. Hirukawa, M. (2010). Nonparametric multiplicative bias correction for kernel-type density estimation on the unit interval. Computational Statistics and Data Analysis 54, 473–495.

  5. Igarashi, G. (2016). Weighted log-normal kernel density estimation. Communications in Statistics - Theory and Methods 45, 6670–6687.

  6. Igarashi, G. and Kakizawa, Y. (2014). Re-formulation of inverse Gaussian, reciprocal inverse Gaussian, and Birnbaum–Saunders kernel estimators. Statistics and Probability Letters 84, 235–246.

  7. Jin, X. and Kawczak, J. (2003). Birnbaum–Saunders and lognormal kernel estimators for modelling durations in high frequency financial data. Annals of Economics and Finance 4, 103–124.

  8. Jones, M. C. (1993). Simple boundary correction for kernel density estimation. Statistics and Computing 3, 135–146.

  9. Koul, H. L. and Song, W. (2013). Large sample results for varying kernel regression estimates. Journal of Nonparametric Statistics 25, 829–853.

  10. Marchant, C., Bertin, K., Leiva, V. and Saulo, H. (2013). Generalized Birnbaum–Saunders kernel density estimators and an analysis of financial data. Computational Statistics and Data Analysis 63, 1–15.

  11. Marron, J. S. and Ruppert, D. (1994). Transformations to reduce boundary bias in kernel density estimation. Journal of the Royal Statistical Society, Series B 56, 653–671.

  12. Mnatsakanov, R. and Sarkisian, K. (2012). Varying kernel density estimation on \(\mathbb{R}_{+}\). Statistics and Probability Letters 82, 1337–1345.

  13. Patil, G. P. and Rao, C. R. (1978). Weighted distributions and size-biased sampling with applications to wildlife populations and human families. Biometrics 34, 179–184.

  14. Rao, C. R. (1965). On discrete distributions arising out of methods of ascertainment. Sankhyā, Series A 27, 311–324.

  15. Rosenblatt, M. (1956). Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics 27, 832–837.

  16. Saulo, H., Leiva, V., Ziegelmann, F. A. and Marchant, C. (2013). A nonparametric method for estimating asymmetric densities based on skewed Birnbaum–Saunders distributions applied to environmental data. Stochastic Environmental Research and Risk Assessment 27, 1479–1491.

  17. Scaillet, O. (2004). Density estimation using inverse and reciprocal inverse Gaussian kernels. Journal of Nonparametric Statistics 16, 217–226.

  18. Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall, London.

  19. Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman & Hall, London.


Acknowledgments

The author thanks Professor Yoshihide Kakizawa for his advice.

Funding

This work was partially supported by the Japan Society for the Promotion of Science (JSPS); Grant-in-Aid for Research Activity Start-up [Grant Number 15H06068].

Author information

Correspondence to Gaku Igarashi.

Appendix: Proofs of Theorems 1–3

We denote by \(\mathbf{Y} = (Y_1, \ldots, Y_d)\) and \(\mathbf{W} = (W_1, \ldots, W_d)\) random vectors that are distributed according to the densities \(K_{\mathbf{\mu}_{\mathbf{x}}, \Sigma_{\mathbf{x}}, \mathbf{\nu}}\) and \(K_{\mathbf{\mu}_{\mathbf{x}}, \frac{1}{2}\Sigma_{\mathbf{x}}, 2\mathbf{\nu} - \mathbf{\iota}}\), respectively. Also, in this appendix, we use the univariate LN density

$$K_{\mu,\sigma^{2}}^{(\text{LN})}(s) = \frac{s^{-1}}{\sqrt{2 \pi \sigma^{2}}} \exp \left\{ - \frac{1}{2\sigma^{2}} (\log s - \mu )^{2} \right\}. $$
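
As a quick numerical sanity check (an illustration, not part of the original argument), the following Python sketch evaluates this density with arbitrary parameter values and confirms that it integrates to one over \((0, \infty)\):

```python
import numpy as np
from scipy.integrate import quad

def ln_density(s, mu, sigma2):
    """Univariate log-normal density K^{(LN)}_{mu, sigma^2}(s), s > 0."""
    return np.exp(-(np.log(s) - mu) ** 2 / (2.0 * sigma2)) / (s * np.sqrt(2.0 * np.pi * sigma2))

# Sanity check with arbitrary parameters: the density integrates to one on (0, inf).
mu, sigma2 = 0.3, 0.5
total, _ = quad(ln_density, 0.0, np.inf, args=(mu, sigma2))
print(total)  # ~1.0
```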

To prove Theorems 1–3, we prepare the following lemmas.

Lemma A.1.

For any \(x_j, x_k \in [0, \infty)\), \(\nu_j \in \mathbb{R}\), and \(j, k = 1, \ldots, d\) (\(j \ne k\)), we have

$$\begin{aligned}
E[Y_{j} - x_{j}] &= \begin{cases}
b_{j} \left( \nu_{j} + \frac{3}{2} \right) + O\bigl( b_{j}^{2} (x_{j}+b_{j})^{-1} \bigr), & \frac{x_{j}}{b_{j}} \to \infty,\\[4pt]
b_{j} \left\{ (\kappa_{j} + 1) \left( 1 + \frac{1}{\kappa_{j} + 1} \right)^{\nu_{j} + 1/2} - \kappa_{j} \right\} + o(b_{j}), & \frac{x_{j}}{b_{j}} \to \kappa_{j},
\end{cases}\\
E[(Y_{j} - x_{j})^{2}] &= \begin{cases}
b_{j} x_{j} + O(b_{j}^{2}), & \frac{x_{j}}{b_{j}} \to \infty,\\
O(b_{j}^{2}), & \frac{x_{j}}{b_{j}} \to \kappa_{j},
\end{cases}\\
E[(Y_{j} - x_{j})^{4}] &= \begin{cases}
O(\{ b_{j} (x_{j}+b_{j}) \}^{2}), & \frac{x_{j}}{b_{j}} \to \infty,\\
O(b_{j}^{4}), & \frac{x_{j}}{b_{j}} \to \kappa_{j},
\end{cases}\\
E[(Y_{j} - x_{j})(Y_{k} - x_{k})] &= \begin{cases}
\rho (b_{j} b_{k} x_{j} x_{k})^{1/2} + O\bigl( (b_{j}^{3} b_{k} x_{j}^{-1} x_{k})^{1/2} + (b_{j} b_{k}^{3} x_{j} x_{k}^{-1})^{1/2} \bigr), & \frac{x_{j}}{b_{j}} \to \infty,\ \frac{x_{k}}{b_{k}} \to \infty,\\
O(b_{j}^{1/2} b_{k} x_{j}^{1/2}), & \frac{x_{j}}{b_{j}} \to \infty,\ \frac{x_{k}}{b_{k}} \to \kappa_{k},\\
O(b_{j} b_{k}^{1/2} x_{k}^{1/2}), & \frac{x_{j}}{b_{j}} \to \kappa_{j},\ \frac{x_{k}}{b_{k}} \to \infty,\\
O(b_{j} b_{k}), & \frac{x_{j}}{b_{j}} \to \kappa_{j},\ \frac{x_{k}}{b_{k}} \to \kappa_{k},
\end{cases}\\
E[(W_{j} - x_{j})^{2}] &= \begin{cases}
O(b_{j} (x_{j}+b_{j})), & \frac{x_{j}}{b_{j}} \to \infty,\\
O(b_{j}^{2}), & \frac{x_{j}}{b_{j}} \to \kappa_{j},
\end{cases}
\end{aligned}$$

where \(\kappa_j, \kappa_k \ge 0\) are constants. In particular, for \(\nu_j = \pm 1/2\), \(E[Y_j - x_j] = b_j(\nu_j + 3/2)\) exactly.

Proof.

Use the exact moment formulas

$$\begin{aligned}
E[Y_{j}^{q}] &= \exp \left\{ q \log(x_{j} + b_{j}) + \left( \nu_{j} + \frac{q}{2} \right) q \log \left( 1 + \frac{b_{j}}{x_{j}+b_{j}} \right) \right\}, \quad q \in \mathbb{R},\\
E[Y_{j} Y_{k}] &= E[Y_{j}]\, E[Y_{k}] \exp \left[ \rho \left\{ \log \left( 1 + \frac{b_{j}}{x_{j}+b_{j}} \right) \log \left( 1 + \frac{b_{k}}{x_{k}+b_{k}} \right) \right\}^{1/2} \right],\\
E[W_{j}^{q}] &= \exp \left[ q \log(x_{j} + b_{j}) + \left( \nu_{j} - \frac{1}{2} + \frac{q}{4} \right) q \log \left( 1 + \frac{b_{j}}{x_{j}+b_{j}} \right) \right.\\
&\quad \left. - \frac{q}{2} \sum_{k \ne j}^{*} \rho \left\{ \log \left( 1 + \frac{b_{j}}{x_{j}+b_{j}} \right) \log \left( 1 + \frac{b_{k}}{x_{k}+b_{k}} \right) \right\}^{1/2} \right], \quad q \in \mathbb{R}.
\end{aligned}$$
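
Since these moment formulas are explicit, the exactness claim at the end of Lemma A.1 can be verified directly. A minimal Python sketch, with arbitrary values of \(x_j\) and \(b_j\), compares \(E[Y_j] - x_j\) with \(b_j(\nu_j + 3/2)\) for \(\nu_j = \pm 1/2\):

```python
import numpy as np

def E_Y_q(q, x, b, nu):
    """Exact moment E[Y_j^q] from the proof of Lemma A.1."""
    return np.exp(q * np.log(x + b) + (nu + q / 2.0) * q * np.log(1.0 + b / (x + b)))

x, b = 1.7, 0.05
for nu in (-0.5, 0.5):
    lhs = E_Y_q(1.0, x, b, nu) - x  # E[Y_j - x_j], computed exactly
    rhs = b * (nu + 1.5)            # claimed exact value for nu = +/- 1/2
    print(nu, lhs, rhs)             # the two values agree up to rounding error
```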

Lemma A.2.

For any ρ ∈ (− 1/(d − 1), 1) and \(\mathbf {\nu } = (\nu _1,\ldots ,\nu _d)^{\prime } \in \mathbb {R}^d\) , we have

$$K_{\mathbf{\mu}_{\mathbf{x}},{\Sigma}_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \le \frac{ d^{d/2} }{ h_{d}^{1/2}(\rho) } \prod\limits_{j = 1}^{d} K_{\widetilde{\mu}_{j}(x_{j}),d \sigma_{jj}(x_{j})}^{(LN)}(s_{j}). $$

Proof.

Let \(\mathbf {t} = (\{ \log s_1 - \widetilde {\mu }_1(x_1) \}/\sigma _{11}^{1/2}, \ldots , \{ \log s_d - \widetilde {\mu }_d(x_d) \}/\sigma _{dd}^{1/2} )^{\prime }\) . Noting that the eigenvalues of R are 1 − ρ (multiplicity d − 1) and 1 + (d − 1)ρ, we have

$$\begin{array}{@{}rcl@{}} (\log \mathbf{s} - \widetilde{\mathbf{\mu}}_{\mathbf{x}} )^{\prime} {\Sigma}_{\mathbf{x}}^{-1} (\log \mathbf{s} - \widetilde{\mathbf{\mu}}_{\mathbf{x}} ) = \mathbf{t}^{\prime} R^{-1} \mathbf{t} \ge \frac{ \mathbf{t}^{\prime}\mathbf{t} }{ v_{d,\rho} } \ge \frac{ \mathbf{t}^{\prime}\mathbf{t} }{ d }, \end{array} $$

where

$$v_{d,\rho} = \begin{cases} 1-\rho, & -1/(d-1) < \rho < 0,\\ 1+(d-1)\rho, & 0 \le \rho < 1. \end{cases}$$

It follows that

$$\begin{array}{@{}rcl@{}} K_{\mathbf{\mu}_{\mathbf{x}},{\Sigma}_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) &\le& \frac{ 1 }{ h_{d}^{1/2}(\rho) } \prod\limits_{j = 1}^{d} \frac{ s_{j}^{-1} }{ \{ 2 \pi \sigma_{jj}(x_{j}) \}^{1/2} } \exp \left[ - \frac{ \{ \log s_{j} - \widetilde{\mu}_{j}(x_{j}) \}^{2} }{ 2 d \sigma_{jj}(x_{j}) } \right]\\ &=& \frac{ d^{d/2} }{ h_{d}^{1/2}(\rho) } \prod\limits_{j = 1}^{d} K_{\widetilde{\mu}_{j}(x_{j}),d \sigma_{jj}(x_{j})}^{(\text{LN})}(s_{j}). \end{array} $$
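
The eigenvalue step above is easy to check numerically. The sketch below (with arbitrary \(d\) and \(\rho\)) builds the equicorrelation matrix \(R\), confirms that its largest eigenvalue is \(v_{d,\rho}\), and tests the quadratic-form bound \(\mathbf{t}' R^{-1} \mathbf{t} \ge \mathbf{t}'\mathbf{t}/v_{d,\rho} \ge \mathbf{t}'\mathbf{t}/d\) on random vectors:

```python
import numpy as np

d, rho = 4, 0.3
R = (1.0 - rho) * np.eye(d) + rho * np.ones((d, d))  # equicorrelation matrix R

# Largest eigenvalue of R equals v_{d,rho} (eigenvalues: 1 - rho and 1 + (d-1)rho).
v = max(1.0 - rho, 1.0 + (d - 1) * rho)
print(np.max(np.linalg.eigvalsh(R)), v)  # agree

# Quadratic-form bound t' R^{-1} t >= t't / v_{d,rho} >= t't / d on random vectors.
rng = np.random.default_rng(0)
Rinv = np.linalg.inv(R)
for _ in range(5):
    t = rng.standard_normal(d)
    print(t @ Rinv @ t >= t @ t / v - 1e-12, t @ Rinv @ t >= t @ t / d - 1e-12)
```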

Lemma A.3.

For any \(\mathbf {\nu } \in \mathbb {R}^d\), there exists a constant Ld, ρ,ν > 0, independent of b, such that

$$\sup_{\mathbf{x} \in [0,\infty)^{d}} \sup_{\mathbf{s} \in [0,\infty)^{d}} K_{\mathbf{\mu}_{\mathbf{x}},{\Sigma}_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \le L_{d,\rho,\mathbf{\nu}} \prod\limits_{j = 1}^{d} b_{j}^{-1}. $$

Proof.

By Lemma A.2, it suffices to bound \(K_{\widetilde{\mu}_j(x_j), d\sigma_{jj}(x_j)}^{(\text{LN})}(s_j)\), as in Lemma 4 of Igarashi (2016). The details are omitted. □
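
The omitted bounding step rests on the elementary fact that the univariate LN density has an explicit supremum: maximizing the log-density gives \(\sup_{s > 0} K_{\mu,\sigma^{2}}^{(\text{LN})}(s) = \exp(\sigma^{2}/2 - \mu)/\sqrt{2\pi\sigma^{2}}\), attained at \(s = e^{\mu - \sigma^{2}}\). A short numerical confirmation (parameters arbitrary):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ln_density(s, mu, sigma2):
    return np.exp(-(np.log(s) - mu) ** 2 / (2.0 * sigma2)) / (s * np.sqrt(2.0 * np.pi * sigma2))

mu, sigma2 = -0.2, 0.8
# Closed form: maximizer s* = exp(mu - sigma^2), value exp(sigma^2/2 - mu)/sqrt(2*pi*sigma^2).
closed = np.exp(sigma2 / 2.0 - mu) / np.sqrt(2.0 * np.pi * sigma2)
numeric = -minimize_scalar(lambda s: -ln_density(s, mu, sigma2),
                           bounds=(1e-8, 10.0), method="bounded").fun
print(closed, numeric)  # agree
```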

Lemma A.4.

For any \(\tau \in (0, 1)\), \(q > 0\), and \(j = 1, \ldots, d\), we have

$${\int}_{b^{-\tau}}^{\infty} K_{\widetilde{\mu}_{j}(x_{j}), d \sigma_{jj}(x_{j})}^{(LN)}(s_{j}) d x_{j} \le (b^{\tau}s_{j})^{q + 1} (1+b^{\tau}b_{j})^{\{ \nu_{j} - d(q + 2) \}^{2}/(2d)}, \quad s_{j} \ge 0. $$

Proof.

The proof is similar to that of Lemma 5 of Igarashi (2016). The details are omitted. □

Proof of Theorem 1.

We have

$$\begin{aligned}
E[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] &= f(\mathbf{x}) + \sum_{j=1}^{d} f_{j}(\mathbf{x}) E[Y_{j}-x_{j}] + \frac{1}{2} \sum_{j=1}^{d} \sum_{k=1}^{d} f_{jk}(\mathbf{x}) E[(Y_{j}-x_{j})(Y_{k}-x_{k})]\\
&\quad + \sum_{j=1}^{d} \sum_{k=1}^{d} \int_{[0,\infty)^{d}} (s_{j}-x_{j})(s_{k}-x_{k}) K_{\mathbf{\mu}_{\mathbf{x}},\Sigma_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \int_{0}^{1} \{ f_{jk}(\mathbf{x}+\theta(\mathbf{s}-\mathbf{x})) - f_{jk}(\mathbf{x}) \} (1-\theta)\, d\theta\, d\mathbf{s},
\end{aligned}$$

where the absolute value of the remainder term is bounded by

$$\begin{aligned}
&\frac{L}{2} E \left[ \{(\mathbf{Y}-\mathbf{x})'(\mathbf{Y}-\mathbf{x})\}^{\eta/2} \sum_{j=1}^{d} \sum_{k=1}^{d} |Y_{j}-x_{j}|\,|Y_{k}-x_{k}| \right]\\
&\quad\le \frac{L}{4} E \left[ \{(\mathbf{Y}-\mathbf{x})'(\mathbf{Y}-\mathbf{x})\}^{\eta/2} \sum_{j=1}^{d} \sum_{k=1}^{d} \{ (Y_{j}-x_{j})^{2} + (Y_{k}-x_{k})^{2} \} \right]\\
&\quad\le \frac{L d^{\eta/2+1}}{2} \sum_{j=1}^{d} \{ E[(Y_{j}-x_{j})^{4}] \}^{(\eta+2)/4}.
\end{aligned}$$

The result now follows from Lemma A.1. □

Proof of Theorem 2.

We can see that

$$\begin{aligned}
V[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})]
&= n^{-1} \int_{[0,\infty)^{d}} \{ K_{\mathbf{\mu}_{\mathbf{x}},\Sigma_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \}^{2} f(\mathbf{s})\, d\mathbf{s} + O(n^{-1})\\
&= n^{-1} \frac{ \exp \{ -\mathbf{\iota}'(\mathbf{\mu}_{\mathbf{x}} + \Sigma_{\mathbf{x}}\mathbf{\nu}) + \frac{1}{4}\mathbf{\iota}'\Sigma_{\mathbf{x}}\mathbf{\iota} \} }{ 2^{d} \pi^{d/2} |\Sigma_{\mathbf{x}}|^{1/2} } \int_{[0,\infty)^{d}} K_{\mathbf{\mu}_{\mathbf{x}},\frac{1}{2}\Sigma_{\mathbf{x}},2\mathbf{\nu}-\mathbf{\iota}}(\mathbf{s}) f(\mathbf{s})\, d\mathbf{s} + O(n^{-1})\\
&= \frac{ n^{-1} }{ 2^{d} \pi^{d/2} h_{d}^{1/2}(\rho) } \left[ \prod_{j=1}^{d} \frac{ \bigl( 1 + \frac{b_{j}}{x_{j}+b_{j}} \bigr)^{-\nu_{j}+1/4} \exp \{ \frac{1}{4} \sum_{k \ne j}^{*} \sigma_{jk}(x_{j},x_{k}) \} }{ \sigma_{jj}^{1/2}(x_{j}) (x_{j}+b_{j}) } \right]\\
&\quad\times \left\{ f(\mathbf{x}) + \sum_{j=1}^{d} \int_{[0,\infty)^{d}} (s_{j}-x_{j}) K_{\mathbf{\mu}_{\mathbf{x}},\frac{1}{2}\Sigma_{\mathbf{x}},2\mathbf{\nu}-\mathbf{\iota}}(\mathbf{s}) \int_{0}^{1} f_{j}(\mathbf{x}+\theta(\mathbf{s}-\mathbf{x}))\, d\theta\, d\mathbf{s} \right\} + O(n^{-1}),
\end{aligned}$$

where the absolute value of the remainder term in the braces is bounded by

$$\begin{array}{@{}rcl@{}} \sum\limits_{j = 1}^{d} C_{j} E[ |W_{j}-x_{j}| ] \le \sum\limits_{j = 1}^{d} C_{j} \{ E[ (W_{j}-x_{j})^{2} ] \}^{1/2}. \end{array} $$

Also, we have

$$\begin{aligned}
&\frac{ \bigl( 1 + \frac{b_{j}}{x_{j}+b_{j}} \bigr)^{-\nu_{j}+1/4} \exp \{ \frac{1}{4} \sum_{k \ne j}^{*} \sigma_{jk}(x_{j},x_{k}) \} }{ \sigma_{jj}^{1/2}(x_{j}) (x_{j}+b_{j}) }\\
&= \begin{cases}
\dfrac{ b_{j}^{-1/2} }{ (x_{j}+b_{j})^{1/2} } \{ 1 + O(b_{j}(x_{j}+b_{j})^{-1}) \} \prod_{k \ne j}^{*} U_{j,k} \{ 1 + B_{5,j,k}(x_{j},x_{k}) \}, & \frac{x_{j}}{b_{j}} \to \infty,\\[10pt]
\dfrac{ b_{j}^{-1} \bigl( 1 + \frac{1}{\kappa_{j}+1} \bigr)^{-\nu_{j}+1/4} }{ \{ \log \bigl( 1 + \frac{1}{\kappa_{j}+1} \bigr) \}^{1/2} (\kappa_{j}+1) } \{ 1 + o(1) \} \prod_{k \ne j}^{*} U_{j,k} \{ 1 + B_{5,j,k}(x_{j},x_{k}) \}, & \frac{x_{j}}{b_{j}} \to \kappa_{j}.
\end{cases}
\end{aligned}$$

The results follow from Lemma A.1. □

Proof of Theorem 3.

Let \(S_b = [b^{\tau_1}, b^{-\tau_2}]^d\) for \(\tau_1 \in (2/3, 1)\) and \(\tau_2 \in (\max\{2/(q+2-d),\, d/\{2(q+2-d)\}\},\, \min\{1/(2d),\, \eta/(\eta+d+2)\})\), where \(\eta\) and \(q\) are given in assumptions A1 and A3, respectively. Then,

$$\text{MISE}[ \hat{f}_{\mathbf{b},\rho,\mathbf{\nu}} ] = \left( {\int}_{S_{b}} + {\int}_{[0,\infty)^{d} \backslash S_{b}} \right) \bigl[ \{ \text{Bias}[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] \}^{2} + V[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] \bigr] d\mathbf{x}. $$

In view of Theorems 1 and 2, it can be shown that

$$\begin{array}{@{}rcl@{}} &&\biggl| {\int}_{S_{b}} V[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] d\mathbf{x} - n^{-1} b^{-d/2} {\int}_{[0,\infty)^{d}} \sigma^{2}(\mathbf{x}) d\mathbf{x} \biggr|\\ &&\le o(n^{-1} b^{-d/2} ) + n^{-1} b^{-d/2} {\int}_{[0,\infty)^{d} \backslash S_{b}} \sigma^{2}(\mathbf{x}) d\mathbf{x} = o(n^{-1} b^{-d/2} ), \end{array} $$

and that

$$\int_{S_{b}} \mathcal{B}^{2}(\mathbf{x})\, d\mathbf{x} = \int_{S_{b}} \left[ \sum_{j=1}^{d} \left\{ B_{1,j}(x_{j}) + \sum_{k \ne j}^{*} B_{2,j,k}(x_{j},x_{k}) + B_{3,j}(x_{j}) \right\} \right]^{2} d\mathbf{x} = o(b^{2}),$$

where \(\mathcal{B}(\mathbf{x}) = \text{Bias}[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] - b\,\gamma(\mathbf{x})\) for \(\mathbf{x} \in S_b\). Hence, we can see that

$$\begin{aligned}
&\left| \int_{S_{b}} \{ \text{Bias}[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] \}^{2}\, d\mathbf{x} - b^{2} \int_{[0,\infty)^{d}} \gamma^{2}(\mathbf{x})\, d\mathbf{x} \right|\\
&= \left| \int_{S_{b}} \mathcal{B}(\mathbf{x}) \{ 2b\,\gamma(\mathbf{x}) + \mathcal{B}(\mathbf{x}) \}\, d\mathbf{x} - b^{2} \int_{[0,\infty)^{d} \backslash S_{b}} \gamma^{2}(\mathbf{x})\, d\mathbf{x} \right|\\
&\le 2b \left\{ \int_{S_{b}} \gamma^{2}(\mathbf{x})\, d\mathbf{x} \int_{S_{b}} \mathcal{B}^{2}(\mathbf{x})\, d\mathbf{x} \right\}^{1/2} + \int_{S_{b}} \mathcal{B}^{2}(\mathbf{x})\, d\mathbf{x} + b^{2} \int_{[0,\infty)^{d} \backslash S_{b}} \gamma^{2}(\mathbf{x})\, d\mathbf{x} = o(b^{2}).
\end{aligned}$$

It remains to evaluate \(\int_{[0,\infty)^d \backslash S_b} \bigl[ \{\text{Bias}[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})]\}^{2} + V[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] \bigr] d\mathbf{x}\). Let \(\mathcal{X}_l = [0, b^{\tau_1})^{d_l}\), \(\mathcal{X}_m = [b^{\tau_1}, b^{-\tau_2}]^{d_m}\), and \(\mathcal{X}_u = [b^{-\tau_2}, \infty)^{d_u}\), where \(d_l + d_m + d_u = d\). In what follows, for simplicity, we consider only the case \(\mathbf{x}_{(l)} = (x_1, \ldots, x_{d_l})'\), \(\mathbf{x}_{(m)} = (x_{d_l+1}, \ldots, x_{d_l+d_m})'\), \(\mathbf{x}_{(u)} = (x_{d_l+d_m+1}, \ldots, x_d)'\), since the other patterns, obtained by permuting the \(d\) indices, are handled in the same way. If \(d_l \ge 1\) and \(d_u = 0\), then we have

$$\begin{aligned}
&\int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \{ \text{Bias}[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] \}^{2}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\le \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \left[ \sum_{j=1}^{d_{l}} \left\{ f_{j}(\mathbf{x}) |E[Y_{j}-x_{j}]| + \frac{1}{2} \sum_{k=1}^{d_{l}} C_{jk} E[|Y_{j}-x_{j}||Y_{k}-x_{k}|] + \sum_{k=d_{l}+1}^{d} C_{jk} E[|Y_{j}-x_{j}||Y_{k}-x_{k}|] \right\} \right.\\
&\qquad \left. + \sum_{j=d_{l}+1}^{d} \left| b_{j} \gamma_{1,j}(\mathbf{x}) + \sum_{k \ne j}^{*} (b_{j}b_{k})^{1/2} \gamma_{2,j,k}(\mathbf{x}) + B_{1,j}(x_{j}) + \sum_{k \ne j}^{*} B_{2,j,k}(x_{j},x_{k}) + B_{3,j}(x_{j}) \right| \right]^{2} d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&= O(b^{(2+d_{l})\tau_{1}} + b^{(4+d_{l})\tau_{1} - d_{m}\tau_{2}} + b^{(2+d_{l})\tau_{1} + 1 - (d_{m}+1)\tau_{2}}) + o(b^{2}) = o(b^{2})
\end{aligned}$$

(here \(\sum_{k \ne j}^{*}\) denotes summation over \(k = d_l+1, \ldots, d\) such that \(k \ne j\)), and

$$\begin{aligned}
&\int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} V[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})]\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\le n^{-1} \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{[0,\infty)^{d}} \{ K_{\mathbf{\mu}_{\mathbf{x}},\Sigma_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \}^{2} \left\{ f(\mathbf{x}) + \sum_{j=1}^{d} (s_{j}-x_{j}) \int_{0}^{1} f_{j}(\mathbf{x}+\theta(\mathbf{s}-\mathbf{x}))\, d\theta \right\} d\mathbf{s}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\le \frac{ n^{-1} \exp \{ \frac{1}{4} d(d-1) \rho \log 2 \} \prod_{j=1}^{d} c_{\nu_{j}} }{ 2^{d/2} \pi^{d/2} h_{d}^{1/2}(\rho) \bigl( \prod_{j=1}^{d} b_{j}^{1/2} \bigr) } \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \left( \prod_{j=1}^{d} x_{j}^{-1/2} \right) \left\{ f(\mathbf{x}) + \sum_{j=1}^{d} C_{j} E[|W_{j}-x_{j}|] \right\} d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&= o(n^{-1}b^{-d/2}) + O(n^{-1} b^{-d/2 + d_{l}\tau_{1}/2 - d_{m}\tau_{2}/2} (b^{\tau_{1}} + b^{(1-\tau_{2})/2})) = o(n^{-1}b^{-d/2}),
\end{aligned}$$

where

$$c_{\nu} = \begin{cases} 2^{-\nu+1/4}, & \nu < \frac{1}{4},\\ 1, & \nu \ge \frac{1}{4}, \end{cases}$$

since, in addition to Lemma A.1, we have

$$\begin{aligned}
E[Y_{j} - x_{j}] &= O(b^{\tau_{1}}) \quad \text{for } x_{j} \le b_{j}^{\tau_{1}},\\
E[(Y_{j}-x_{j})^{2}] &= O(b^{2\tau_{1}}) \quad \text{for } x_{j} \le b_{j}^{\tau_{1}},\\
E[|Y_{j}-x_{j}||Y_{k}-x_{k}|] &= \begin{cases}
O(b^{2\tau_{1}}) & \text{for } x_{j}, x_{k} \le b_{j}^{\tau_{1}},\\
O(b^{\tau_{1}+(1-\tau_{2})/2}) & \text{for } x_{j} \le b_{j}^{\tau_{1}} \text{ and } x_{k} \in [b_{j}^{\tau_{1}}, b_{j}^{-\tau_{2}}],
\end{cases}\\
E[(W_{j} - x_{j})^{2}] &= O(b^{2\tau_{1}}) \quad \text{for } x_{j} \le b_{j}^{\tau_{1}},
\end{aligned}$$

and \(t/2 \le \log(1+t) \le \log 2\) for \(t \in [0,1]\). Also, if \(d_u \ge 1\), then we can see that

$$\begin{aligned}
&\int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{\mathcal{X}_{u}} \{ \text{Bias}[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})] \}^{2}\, d\mathbf{x}_{(u)}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&= \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{\mathcal{X}_{u}} \left[ \int_{[0,\infty)^{d}} K_{\mathbf{\mu}_{\mathbf{x}},\Sigma_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \{ f(\mathbf{s}) - f(\mathbf{x}) \}\, d\mathbf{s} \right]^{2} d\mathbf{x}_{(u)}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\le \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{\mathcal{X}_{u}} \int_{[0,\infty)^{d}} K_{\mathbf{\mu}_{\mathbf{x}},\Sigma_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \{ f(\mathbf{s}) - f(\mathbf{x}) \}^{2}\, d\mathbf{s}\, d\mathbf{x}_{(u)}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\le \frac{ 2Cd^{d/2} }{ h_{d}^{1/2}(\rho) } \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{[0,\infty)^{d}} \left\{ \prod_{j=1}^{d_{l}+d_{m}} K_{\widetilde{\mu}_{j}(x_{j}), d\sigma_{jj}(x_{j})}^{(\text{LN})}(s_{j}) \right\} \left\{ b^{d_{u}\tau_{2}(q+1)} \prod_{j=d_{l}+d_{m}+1}^{d} s_{j}^{q+1} (1+o(1)) \right\} f(\mathbf{s})\, d\mathbf{s}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\quad + 2Cb^{d_{u}\tau_{2}(q+1)} \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{\mathcal{X}_{u}} \left( \prod_{j=d_{l}+d_{m}+1}^{d} x_{j}^{q+1} \right) f(\mathbf{x})\, d\mathbf{x}_{(u)}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&= O(b^{d_{l}\tau_{1} - d_{m}\tau_{2} + d_{u}\tau_{2}(q+1)}) = o(b^{2}),
\end{aligned}$$

and

$$\begin{aligned}
&\int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{\mathcal{X}_{u}} V[\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}(\mathbf{x})]\, d\mathbf{x}_{(u)}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\le n^{-1} \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{\mathcal{X}_{u}} \int_{[0,\infty)^{d}} \{ K_{\mathbf{\mu}_{\mathbf{x}},\Sigma_{\mathbf{x}},\mathbf{\nu}}(\mathbf{s}) \}^{2} f(\mathbf{s})\, d\mathbf{s}\, d\mathbf{x}_{(u)}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&\le n^{-1} L_{d,\rho,\mathbf{\nu}} \left( \prod_{j=1}^{d} b_{j}^{-1} \right) \int_{\mathcal{X}_{l}} \int_{\mathcal{X}_{m}} \int_{[0,\infty)^{d}} \left\{ \prod_{j=1}^{d_{l}+d_{m}} K_{\widetilde{\mu}_{j}(x_{j}), d\sigma_{jj}(x_{j})}^{(\text{LN})}(s_{j}) \right\} \left\{ b^{d_{u}\tau_{2}(q+1)} \prod_{j=d_{l}+d_{m}+1}^{d} s_{j}^{q+1} (1+o(1)) \right\} f(\mathbf{s})\, d\mathbf{s}\, d\mathbf{x}_{(m)}\, d\mathbf{x}_{(l)}\\
&= O(n^{-1} b^{-d + d_{l}\tau_{1} - d_{m}\tau_{2} + d_{u}\tau_{2}(q+1)}) = o(n^{-1} b^{-d/2}),
\end{aligned}$$

using Lemmas A.2–A.4. □
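
For readers who want to experiment with the estimator, the following Python sketch reconstructs \(\hat{f}_{\mathbf{b},\rho,\mathbf{\nu}}\) from the moment formulas in the proof of Lemma A.1, which are consistent with \(K_{\mathbf{\mu}_{\mathbf{x}},\Sigma_{\mathbf{x}},\mathbf{\nu}}\) being a \(d\)-variate log-normal density with log-scale mean \(\widetilde{\mu}_j(x_j) = \log(x_j+b_j) + \nu_j\sigma_{jj}(x_j)\), \(\sigma_{jj}(x_j) = \log(1 + b_j/(x_j+b_j))\), and \(\sigma_{jk} = \rho(\sigma_{jj}\sigma_{kk})^{1/2}\). This is an inferred reconstruction, not the paper's verbatim definition, and the bivariate toy example (independent unit exponentials) is illustrative only:

```python
import numpy as np

# Hypothetical reconstruction (inferred, not the paper's verbatim definition):
# the moment formulas in the proof of Lemma A.1 match a d-variate log-normal
# kernel with log(Y) ~ N_d(mu_tilde_x, Sigma_x), where
#   sigma_jj(x_j)   = log(1 + b_j / (x_j + b_j)),
#   mu_tilde_j(x_j) = log(x_j + b_j) + nu_j * sigma_jj(x_j),
#   sigma_jk        = rho * sqrt(sigma_jj * sigma_kk)   (j != k).

def f_hat(x, data, b, rho, nu):
    """Density estimate at x in [0, inf)^d from an (n, d) sample `data`."""
    x, b, nu = np.asarray(x), np.asarray(b), np.asarray(nu)
    d = x.size
    sjj = np.log(1.0 + b / (x + b))            # diagonal of Sigma_x
    mu = np.log(x + b) + nu * sjj              # mu_tilde_x
    Sigma = rho * np.sqrt(np.outer(sjj, sjj))  # off-diagonal entries
    np.fill_diagonal(Sigma, sjj)
    z = np.log(data) - mu                      # (n, d) array of log-deviations
    qf = np.einsum("ni,ij,nj->n", z, np.linalg.inv(Sigma), z)
    dens = np.exp(-0.5 * qf) / (np.sqrt((2.0 * np.pi) ** d * np.linalg.det(Sigma))
                                * np.prod(data, axis=1))
    return dens.mean()

# Toy bivariate run: independent Exp(1) margins, true density exp(-x1 - x2).
rng = np.random.default_rng(1)
X = rng.exponential(size=(500, 2))
pt = np.array([0.8, 1.2])
est = f_hat(pt, X, b=np.array([0.1, 0.1]), rho=0.0, nu=np.array([0.5, 0.5]))
print(est, np.exp(-pt.sum()))  # estimate vs. true value (~0.135)
```

Note that, in this reconstruction, taking \(\rho = 0\) factorizes the kernel into a product of univariate weighted LN kernels, so the sketch then behaves coordinate-wise like the univariate estimator of Igarashi (2016).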



Cite this article

Igarashi, G. Multivariate Density Estimation Using a Multivariate Weighted Log-Normal Kernel. Sankhya A 80, 247–266 (2018). https://doi.org/10.1007/s13171-018-0125-y


Keywords and phrases

  • Nonparametric density estimation
  • Boundary problem
  • Asymmetric kernel
  • Multivariate log-normal density

AMS (2000) subject classification

  • Primary 62G07
  • Secondary 62G20