
Local Influence Detection of Conditional Mean Dependence

Communications in Mathematics and Statistics

Abstract

This article focuses on measuring and testing the conditional mean dependence of a response variable on a predictor variable. A local influence detection approach is developed in combination with the martingale difference divergence (MDD) metric, and an efficient wild bootstrap implementation is given. The resulting new metric of conditional mean dependence retains the merits of MDD while being more sensitive than the original, and it leads to a powerful test for nonlinear relationships. Simulations show that the proposed test achieves higher power for general conditional mean dependence relationships even in high-dimensional settings. Asymptotic properties of the local influence test statistic are established, and a real data analysis is presented for further illustration. The localization idea can also be combined with other conditional mean dependence metrics.
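To fix ideas, the following sketch (Python, illustrative only) computes a simplified localized-MDD-type statistic built from the quantities that appear in the Appendix: pairwise distances \(\rho (X_i, X_j)\), response inner products \(\langle Y_i, Y_j\rangle \), and a localization indicator that keeps only pairs with \(g(\rho (X_i, X_j))\le t\). Taking \(g\) to be the empirical distance CDF and using simple plug-in centering are assumptions made here for illustration; the exact estimator \(\textrm{LMDD}_{n, Y\vert X}(t)\), including its U-centering correction \(R_{ij}\), is defined in Section 2 and is not reproduced in this sketch.

```python
import numpy as np

def lmdd_sketch(x, y, t):
    """Simplified localized-MDD-type statistic at localization level t in [0, 1].
    Illustrative plug-in sketch only; not the exact LMDD_{n, Y|X}(t) of the paper."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = x.shape[0]
    rho = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)       # rho(X_i, X_j)
    b = y @ y.T                                                       # <Y_i, Y_j>
    # assumed localization map g: empirical CDF of the off-diagonal distances
    off = np.sort(rho[~np.eye(n, dtype=bool)])
    c = (np.searchsorted(off, rho, side="right") / off.size <= t).astype(float)
    # sample analogue of Phi_rho: center rho in its second argument
    a_c = rho - rho.mean(axis=1, keepdims=True)
    # sample analogue of Psi: double-center the response Gram matrix
    b_c = b - b.mean(axis=1, keepdims=True) - b.mean(axis=0, keepdims=True) + b.mean()
    mask = ~np.eye(n, dtype=bool)
    return -(a_c * b_c * c)[mask].sum() / (n * (n - 1))

# toy usage: statistic at a few localization levels; larger values suggest
# conditional mean dependence of Y on X
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = np.sin(x) + 0.5 * rng.standard_normal(100)
print([round(lmdd_sketch(x, y, t), 4) for t in (0.25, 0.5, 1.0)])
```

A calibrated test would compare such a statistic with wild-bootstrap critical values, in the spirit of Section 3.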


References

  1. Arcones, M.A., Giné, E.: Limit theorems for U-processes. Ann. Probab. 21(3), 1494–1542 (1993)

  2. Fang, L., Yuan, Q., Ye, C., Yin, X.: High-dimensional variable screening via conditional martingale difference divergence. arXiv preprint arXiv:2206.11944 (2022)

  3. Free, S., O’Higgins, P., Maudgil, D., Dryden, I., Lemieux, L., Fish, D., Shorvon, S.: Landmark-based morphometrics of the normal adult brain using MRI. Neuroimage 13(5), 801–813 (2001)

  4. Kendall, D.G.: Shape manifolds, Procrustean metrics, and complex projective spaces. Bull. Lond. Math. Soc. 16(2), 81–121 (1984)

  5. Kosorok, M.R.: Introduction to empirical processes and semiparametric inference. Springer, New York (2007)

  6. Lai, T., Zhang, Z., Wang, Y.: A kernel-based measure for conditional mean dependence. Comput. Stat. Data Anal. 160, 107246 (2021)

  7. Lee, C., Zhang, X., Shao, X.: Testing conditional mean independence for functional data. Biometrika 107(2), 331–346 (2020)

  8. Li, R., Xu, K., Zhou, Y., Zhu, L.: Testing the effects of high-dimensional covariates via aggregating cumulative covariances. J. Am. Stat. Assoc. (2022). https://doi.org/10.1080/01621459.2022.2044334

  9. Lyons, R.: Distance covariance in metric spaces. Ann. Probab. 41(5), 3284–3305 (2013)

  10. Nolan, D., Pollard, D.: Functional limit theorems for U-processes. Ann. Probab. 16(3), 1291–1298 (1988)

  11. Pan, W., Wang, X., Zhang, H., Zhu, H., Zhu, J.: Ball covariance: a generic measure of dependence in Banach space. J. Am. Stat. Assoc. 115(529), 307–317 (2020)

  12. Park, T., Shao, X., Yao, S.: Partial martingale difference correlation. Electron. J. Stat. 9(1), 1492–1517 (2015)

  13. Shao, X., Zhang, J.: Martingale difference correlation and its use in high-dimensional variable screening. J. Am. Stat. Assoc. 109(507), 1302–1318 (2014)

  14. Székely, G.J., Rizzo, M.L., Bakirov, N.K.: Measuring and testing dependence by correlation of distances. Ann. Stat. 35(6), 2769–2794 (2007)

  15. van der Vaart, A.W., Wellner, J.A.: Weak convergence and empirical processes. Springer, New York (1996)

  16. de la Peña, V.H., Giné, E.: Decoupling: from dependence to independence. Springer, New York (1999)

  17. Wang, G., Zhu, K., Shao, X.: Testing for the martingale difference hypothesis in multivariate time series models. J. Bus. Econ. Stat. 40, 1–15 (2021)

  18. Zhang, X., Yao, S., Shao, X.: Conditional mean and quantile dependence testing in high dimension. Ann. Stat. 46(1), 219–246 (2018)

  19. Zhou, T., Zhu, L., Xu, C., Li, R.: Model-free forward screening via cumulative divergence. J. Am. Stat. Assoc. 115(531), 1393–1405 (2020)

  20. Zhu, J., Pan, W., Zheng, W., Wang, X.: Ball: an R package for detecting distribution difference and association in metric spaces. J. Stat. Softw. 97(1), 1–31 (2021)


Acknowledgements

The authors are very grateful to the Editors and two anonymous referees for their helpful suggestions. The research was partly supported by the Project of Improving the Basic Scientific Research Ability of Young and Middle-aged College Teachers in Guangxi (2023KY0058) and the National Natural Science Foundation of China (No. 12271014 and No. 11971045).

Author information


Corresponding author

Correspondence to Zhongzhan Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

6 Appendix

Proof of Proposition 2.1

Note that \(\Psi (Y, Y^{\prime })=\langle Y - \mu , Y^{\prime }-\mu \rangle \), where \(\mu = E(Y)\). Therefore,

$$\begin{aligned} -E\{\Phi (X, X^{\prime })\Psi (Y, Y^{\prime })\}&=-E\left\{ \left( \vert X-X^{\prime }\vert - \vert X^{\prime \prime }-X^{\prime } \vert \right) \Psi (Y, Y^{\prime })\right\} \\&= -E\left\{ \vert X-X^{\prime }\vert \Psi (Y, Y^{\prime })\right\} +E\left\{ \vert X^{\prime \prime }-X^{\prime } \vert \Psi (Y, Y^{\prime })\right\} \\&=-E\left\{ \vert X-X^{\prime }\vert \Psi (Y, Y^{\prime })\right\} \\&= \textrm{FMDD}(Y\vert X). \end{aligned}$$

The third equality holds because

$$\begin{aligned} E\left\{ \vert X^{\prime \prime }-X^{\prime } \vert \Psi (Y, Y^{\prime })\right\} =E\left\{ \vert X^{\prime \prime }-X^{\prime } \vert \langle E(Y) - \mu , Y^{\prime }-\mu \rangle \right\} =0. \end{aligned}$$

This completes the proof. \(\square \)
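The key step above is that \(E\left\{ \vert X^{\prime \prime }-X^{\prime } \vert \Psi (Y, Y^{\prime })\right\} \) vanishes because \(Y\) is independent of \((X^{\prime }, X^{\prime \prime }, Y^{\prime })\) and \(E(Y-\mu )=0\). A quick Monte Carlo check makes this concrete (Python sketch; the scalar model \(Y=X^2+\varepsilon \) is only an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def draw(size):
    x = rng.standard_normal(size)
    y = x**2 + rng.standard_normal(size)     # E(Y | X) nonconstant on purpose
    return x, y

x1, y1 = draw(n)    # (X , Y )
x2, y2 = draw(n)    # (X', Y')
x3, _  = draw(n)    # X'' (an independent copy of X)

mu = y1.mean()                               # stand-in for mu = E(Y)
psi = (y1 - mu) * (y2 - mu)                  # Psi(Y, Y') for scalar Y

print(np.mean(np.abs(x3 - x2) * psi))        # ~ 0 up to Monte Carlo error
print(np.mean(np.abs(x1 - x2) * psi))        # nonzero; by Proposition 2.1 its
                                             # negative estimates FMDD(Y|X)
```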

Proof of Theorem 2.5

Since \(E(Y\vert X)=E(Y)\) almost surely, we have, for any \(t\in [0, 1]\),

$$\begin{aligned} \textrm{LMDD}_{Y\vert X}(t)&=-E[E\{\Phi _{\rho }(X, X^{\prime })L_{X, X^{\prime }}(t)\Psi (Y, Y^{\prime }) \vert X, X^{\prime }\}]\\&= -E[\Phi _{\rho }(X, X^{\prime })L_{X, X^{\prime }}(t)E\{\Psi (Y, Y^{\prime }) \vert X, X^{\prime }\}]\\&= -E[\Phi _{\rho }(X, X^{\prime })L_{X, X^{\prime }}(t)E\{\Psi (Y, Y^{\prime })\}]\\&=0. \end{aligned}$$

The last equality holds because \(E\{\Psi (Y, Y^{\prime })\}=0\).

The second assertion of the theorem is clear since \( L_{X, X^{\prime }}(1)=1 \), which leads to \( \textrm{LMDD}_{Y\vert X}(1)= \textrm{FMDD}_{\rho }(Y\vert X)\). \(\square \)

Proof of Theorem 2.6

For any \(\delta \) such that \(t+\delta \in [0, 1]\), we have

$$\begin{aligned}&\vert \textrm{LMDD}_{Y\vert X}(t+\delta )-\textrm{LMDD}_{Y\vert X}(t) \vert \\&\quad = \vert E\{\Phi _{\rho }(X, X^{\prime })L_{X, X^{\prime }}(t+\delta )\Psi (Y, Y^{\prime })\}-E\{\Phi _{\rho }(X, X^{\prime })L_{X, X^{\prime }}(t)\Psi (Y, Y^{\prime })\} \vert \\&\quad = \vert E[\Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })\{L_{X, X^{\prime }}(t+\delta )-L_{X, X^{\prime }}(t)\}] \vert . \end{aligned}$$

Next we prove that \(E[\Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })\{L_{X, X^{\prime }}(t+\delta )-L_{X, X^{\prime }}(t)\}]\) converges to zero as \(\delta \rightarrow 0\).

By the conditions that \( \rho (X, X^{\prime }) \) is a continuous random variable and that g is continuous, \(L_{X, X^{\prime }}(t+\delta )\) converges almost surely to \(L_{X, X^{\prime }}(t)\) as \( \delta \rightarrow 0 \); therefore, \(\Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })(L_{X, X^{\prime }}(t+\delta )-L_{X, X^{\prime }}(t))\rightarrow 0\) almost surely. Since \( \vert \Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })(L_{X, X^{\prime }}(t+\delta )-L_{X, X^{\prime }}(t)) \vert \le 2 \vert \Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime }) \vert \), if \(E[ \vert \Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime }) \vert ] < \infty \), by the dominated convergence theorem, \(E \vert \Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })(L_{X, X^{\prime }}(t+\delta )-L_{X, X^{\prime }}(t)) \vert \) converges to zero as \(\delta \rightarrow 0\). Then with this result and the inequality

$$\begin{aligned}&\vert E[\Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })(L_{X, X^{\prime }}(t+\delta )-L_{X, X^{\prime }}(t))] \vert \\&\quad \le E[ \vert \Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })(L_{X, X^{\prime }}(t+\delta )-L_{X, X^{\prime }}(t)) \vert ], \end{aligned}$$

the conclusion follows.

We verify the condition \(E[ \vert \Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime }) \vert ] < \infty \). By the conditions of Theorem 2.6, there exists an \(o\in {\mathcal {X}}\) such that \(E[\rho (o, X)] <\infty \) and \(E \vert Y \vert <\infty \). Decomposing \(\Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })\), we have

$$\begin{aligned}&\Phi _{\rho }(X, X^{\prime })\Psi (Y, Y^{\prime })\\&\quad =\rho (X, X^{\prime })\langle Y , Y^{\prime } \rangle -\rho (X, X^{\prime })\langle Y , \mu \rangle -\rho (X, X^{\prime })\langle \mu , Y^{\prime } \rangle +\rho (X, X^{\prime })\langle \mu , \mu \rangle \\&\qquad - E_{X^{\prime \prime }}[\rho (X, X^{\prime \prime })]\langle Y , Y^{\prime } \rangle +E_{X^{\prime \prime }}[\rho (X, X^{\prime \prime })]\langle Y , \mu \rangle +E_{X^{\prime \prime }}[\rho (X, X^{\prime \prime })]\langle \mu , Y^{\prime } \rangle \\&\qquad -E_{X^{\prime \prime }}[\rho (X, X^{\prime \prime })]\langle \mu , \mu \rangle , \end{aligned}$$

where \(\mu =E(Y)\) and \(E_{X^{\prime \prime }}\) means taking expectation with respect to \(X^{\prime \prime }\). It can be verified that the expectation of the absolute value of each term in the above display is bounded. We only show the term \(\rho (X, X^{\prime })\langle Y, Y^{\prime }\rangle \), and the others can be done by similar arguments. By direct computations,

$$\begin{aligned} E\{ \vert \rho (X, X^{\prime })\langle Y , Y^{\prime }\rangle \vert \}&\le E\{\rho (X, o) \vert Y \vert \vert Y^{\prime } \vert \} +E\{\rho (o, X^{\prime }) \vert Y \vert \vert Y^{\prime } \vert \}\\&= E\{\rho (X, o) \vert Y \vert \}E( \vert Y^{\prime } \vert ) +E\{\rho (o, X^{\prime }) \vert Y^{\prime } \vert \}E( \vert Y \vert )<\infty , \end{aligned}$$

which completes the proof. \(\square \)

Proof of Theorem 2.10

We drop the argument t and simply write \(c_{ij}\) for \({\tilde{C}}_{ij}(t)\). Then \(\textrm{LMDD}_{n, Y\vert X}(t)\) can be decomposed into nine terms as follows,

$$\begin{aligned}&\textrm{LMDD}_{n, Y\vert X}(t) \nonumber \\&\quad = -\frac{1}{n(n-1)}\sum _{i\ne j}a_{ij}b_{ij}c_{ij} + \frac{1}{n(n-1)}\sum _{i\ne j}a_{ij}b_{i\cdot }c_{ij} + \frac{1}{n(n-1)}\sum _{i\ne j}a_{ij}b_{\cdot j}c_{ij} \nonumber \\&\qquad -\frac{1}{n(n-1)}\sum _{i\ne j}a_{ij}b_{\cdot \cdot }c_{ij} +\frac{1}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{ij}c_{ij} - \frac{1}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{i\cdot }c_{ij} \nonumber \\&\qquad -\frac{1}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{\cdot j}c_{ij} + \frac{1}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{\cdot \cdot }c_{ij} - \frac{1}{n(n-1)}\sum _{i\ne j}R_{ij}\nonumber \\&\quad :=J_1+J_2+\cdots + J_9, \end{aligned}$$
(6.1)

where

$$\begin{aligned}&a_{i\cdot }=\frac{1}{n-1}\sum _{k=1}^{n}a_{ik}, \qquad b_{i\cdot }= \frac{1}{n-2}\sum _{s=1}^{n}b_{is}, \\&b_{\cdot j} = \frac{1}{n-2}\sum _{l=1 }^{n}b_{lj}, \qquad b_{\cdot \cdot }=\frac{1}{(n-2)(n-3)}\sum _{s, l=1}^{n}b_{sl}. \end{aligned}$$

Meanwhile, the \(\textrm{U}\)-process \(U_n(t)\) defined in (2.9) can be rewritten as

$$\begin{aligned} U_n(t) =&-\frac{1}{n(n-1)}\sum _{i\ne j}a_{ij}b_{ij}c_{ij} + \frac{1}{n(n-1)(n-2)}\sum _{(i,j,l)}a_{ij}b_{il}c_{ij} \\&+ \frac{1}{n(n-1)(n-2)}\sum _{(i, j,s)}a_{ij}b_{s j}c_{ij} - \frac{1}{n(n-1)(n-2)(n-3)}\sum _{(i,j,s,l)}a_{ij}b_{sl}c_{ij}\\&+\frac{1}{n(n-1)(n-2)}\sum _{(i,j,k)}a_{ik}b_{ij}c_{ij} - \frac{1}{n(n-1)(n-2)(n-3)}\sum _{(i, j, k, l)}a_{ik}b_{il}c_{ij}\\&-\frac{1}{n(n-1)(n-2)(n-3)}\sum _{(i, j, k, s)}a_{ik}b_{s j}c_{ij} \\&+ \frac{1}{n(n-1)(n-2)(n-3)(n-4)}\sum _{(i, j, k, s, l)}a_{ik}b_{sl}c_{ij} \\ :=&J_1^*+J_2^*+\cdots + J_8^*, \end{aligned}$$

where and hereinafter \(\sum _{(i_1,\dots ,i_k)}\) means the summation over all k-tuples \((i_1,\dots ,i_k)\) of pairwise distinct indices.
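The passage between the V-type sums in (6.1) and the distinct-tuple sums above is purely combinatorial bookkeeping: an unrestricted inner sum splits into its distinct-index part plus overlapping-index corrections. As a concrete illustration, the short sketch below (Python, with arbitrary small matrices, not the actual data quantities) checks one such identity, evaluating \(\sum _{(i,j,l)}a_{ij}b_{il}c_{ij}\) both by brute force and via unrestricted row sums:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n = 8
a, b, c = (rng.standard_normal((n, n)) for _ in range(3))

# brute force over all ordered triples (i, j, l) of pairwise distinct indices
brute = sum(a[i, j] * b[i, l] * c[i, j] for i, j, l in permutations(range(n), 3))

# vectorised: for fixed (i, j) with i != j, sum_l b[i, l] excluding l = i and l = j
d = a * c
np.fill_diagonal(d, 0.0)                                  # enforce i != j
inner = b.sum(axis=1, keepdims=True) - np.diag(b)[:, None] - b
fast = (d * inner).sum()

print(np.isclose(brute, fast))                            # True
```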

It can be verified that

$$\begin{aligned}&\sup _{t\in [0, 1]}\{ \vert J_i-J^*_i \vert \}\xrightarrow [ ]{a.s.} 0, \qquad \text {for } i =1, \cdots , 8, \nonumber \\&\sup _{t\in [0, 1]}\{ \vert J_9 \vert \}\xrightarrow [ ]{a.s.} 0. \end{aligned}$$
(6.2)

We only verify \(\sup _{t\in [0, 1]}\{ \vert J_4-J^*_4 \vert \}\xrightarrow [ ]{a.s.} 0\), and the others can be done in a similar way. By direct calculations,

$$\begin{aligned}&\vert J_4-J^*_4 \vert \\&\quad =\Bigg \vert \frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{s,l=1}^{n}a_{ij}b_{sl }c_{ij} -\frac{1}{n(n-1)(n-2)(n-3)}\sum _{(i,j,s,l)}a_{ij}b_{sl}c_{ij}\Bigg \vert \\&\quad =\Bigg \vert \frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{s=1}^{n}a_{ij}b_{si}c_{ij} +\frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{s=1}^{n}a_{ij}b_{sj}c_{ij}\\&\qquad +\frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{l=1}^{n}a_{ij}b_{il}c_{ij} +\frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{l=1}^{n}a_{ij}b_{jl}c_{ij}\Bigg \vert \\&\quad \le \Bigg \vert \frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{s=1}^{n}a_{ij}b_{si}\Bigg \vert +\Bigg \vert \frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{s=1}^{n}a_{ij}b_{sj}\Bigg \vert \\&\qquad +\Bigg \vert \frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{l=1}^{n}a_{ij}b_{il}\Bigg \vert +\Bigg \vert \frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{l=1}^{n}a_{ij}b_{jl}\Bigg \vert \\&\quad :=N_1+\cdots +N_4. \end{aligned}$$

It is easy to show that \(N_i\xrightarrow [ ]{a.s.}0\), \(i=1, \cdots , 4\). We only show \(N_1\xrightarrow [ ]{a.s.}0\). To this end, observe that

$$\begin{aligned}&\frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}\sum _{s=1}^{n}a_{ij}b_{si}\\&\quad = \frac{1}{n(n-1)(n-2)(n-3)}\sum _{(i, j, s)}a_{ij}b_{si}+\frac{1}{n(n-1)(n-2)(n-3)}\sum _{i\ne j}a_{ij}b_{ji} \end{aligned}$$

and that, by the law of large numbers for \(\textrm{U}\)-statistics, \(\frac{1}{n(n-1)(n-2)}\sum _{(i, j, s)}a_{ij}b_{si}\xrightarrow []{a.s.}E[a_{12}b_{31}]\) and \(\frac{1}{n(n-1)}\sum _{i\ne j}a_{ij}b_{ji}\xrightarrow []{a.s.}E[a_{12}b_{21}]\), provided that \(E[ \vert a_{12}b_{31} \vert ]<\infty \) and \(E[ \vert a_{12}b_{21} \vert ]<\infty \). This is indeed the case, since

$$\begin{aligned} E[ \vert a_{12}b_{31} \vert ]=&E\rho (X_1, X_2) \vert \langle Y_1 , Y_3 \rangle \vert \nonumber \\ \le&E[\rho (X_1, o) \vert Y_1 \vert ] E[ \vert Y_3 \vert ] +E[\rho (X_2, o)] E[ \vert Y_1 \vert ] E[ \vert Y_3 \vert ] <\infty , \end{aligned}$$
(6.3)

and

$$\begin{aligned} E[ \vert a_{12}b_{12} \vert ]=&E\rho (X_1, X_2) \vert \langle Y_1 , Y_2 \rangle \vert \nonumber \\ \le&E[\rho (X_1, o) \vert Y_1 \vert ] E[ \vert Y_2 \vert ] +E[\rho (X_2, o) \vert Y_2 \vert ] E[ \vert Y_1 \vert ] <\infty . \end{aligned}$$
(6.4)

Therefore, \(N_1\xrightarrow []{a.s.}0\).

By (6.2), we have \(\sup _{t\in [0, 1]}\{ \vert \textrm{LMDD}_{n,Y\vert X}(t)-U_n(t) \vert \}\xrightarrow []{a.s.}0\). Thus, to prove that \(\sup _{t\in [0, 1]}\{ \vert \textrm{LMDD}_{n,Y\vert X}(t)-\textrm{LMDD}_{Y\vert X}(t) \vert \}\xrightarrow []{a.s.}0\), we only need to show that \(\sup _{t\in [0, 1]}\{ \vert U_n(t)-\textrm{LMDD}_{Y\vert X}(t) \vert \}\xrightarrow []{a.s.}0\). Since \(U_n(t)\) is a \(\textrm{U}\)-process and \(E[U_n(t)]=\textrm{LMDD}_{Y\vert X}(t)\), we can achieve this by applying the theory of \(\textrm{U}\)-processes. Specifically, by Corollary 3.3 of Arcones and Giné [1], we only need to verify that the function class \({\mathcal {F}}=\{h_t(w_1, \dots , w_5): t \in [0, 1]\}\) is an image-admissible Suslin VC-class (see page 138 of de la Peña and Giné [16] for the definition) with an envelope function \(G(w_1, \dots , w_5)\) satisfying \(E\{G(W_1, \dots , W_5)\}<\infty \), where \(W_i=(X_i, Y_i)\), \(w_i=(x_i, y_i)\) and

$$\begin{aligned} h_t(w_1, \dots , w_5)=&\left[ \rho (x_1, x_2)-\rho (x_1, x_3)\right] \left[ \langle y_1, y_2\rangle -\langle y_1 ,y_4\rangle -\langle y_2, y_5\rangle +\langle y_4, y_5\rangle \right] \\&\cdot I(g(\rho (x_1, x_2))\le t). \end{aligned}$$

We verify these conditions in the following. Choose the envelope function as \(G(w_1, \cdots , w_5)=|\left\{ \rho (x_1, x_2)-\rho (x_1, x_3)\right\} \left\{ \langle y_1, y_2\rangle -\langle y_1,y_4\rangle -\langle y_2, y_5\rangle +\langle y_4, y_5\rangle \right\} |\). By the conditions of the theorem, we have

$$\begin{aligned}&E\{G(W_1, \dots , W_5)\}\\&\quad =E|\left\{ \rho (X_1, X_2)-\rho (X_1, X_3)\right\} \left\{ \langle Y_1, Y_2\rangle -\langle Y_1, Y_4\rangle -\langle Y_5, Y_2\rangle +\langle Y_4, Y_5\rangle \right\} |\\&\quad \le E|\rho (X_1, X_2)\langle Y_1, Y_2\rangle |+ E|\rho (X_1, X_2)\langle Y_1, Y_4\rangle |+ E|\rho (X_1, X_2)\langle Y_5, Y_2\rangle |\\&\qquad + E|\rho (X_1, X_2)\langle Y_4, Y_5\rangle |+ E|\rho (X_1, X_3)\langle Y_1, Y_2\rangle |+ E|\rho (X_1, X_3)\langle Y_1, Y_4\rangle |\\&\qquad + E|\rho (X_1, X_3)\langle Y_5, Y_2\rangle |+ E|\rho (X_1, X_3)\langle Y_4, Y_5\rangle |\\&\quad < \infty , \end{aligned}$$

where the last inequality is obtained by using similar arguments as in (6.3) and (6.4).

Next, we show that \({\mathcal {F}}\) is an image-admissible Suslin VC-class. We first show that the collection \({\mathcal {C}}=\{C_t: t\in [0, 1]\}\), where \(C_t=\{(x, x^{\prime }): g(\rho (x, x^{\prime }))\le t\}\), is a VC-class. Suppose that \(p_1=(x_1, x_1^{\prime })\) and \(p_2=(x_2, x_2^{\prime })\) are two distinct points in \({\mathcal {X}}\times {\mathcal {X}}\) and that \({\mathcal {C}}\) shatters the set of these two points. Then there exist \(0\le t_1,t_2\le 1\) such that \(p_1\in C_{t_1}, p_2\not \in C_{t_1}\) and \(p_2\in C_{t_2}, p_1\not \in C_{t_2}\). Without loss of generality, assume \(0 \le t_1 < t_2 \le 1\). By the form of \(C_t\), we have \(C_{t_1}\subset C_{t_2}\). Then \(p_1\in C_{t_2}\), which contradicts the fact that \(p_1\not \in C_{t_2}\). Hence, \({\mathcal {C}}\) cannot shatter the set \(\{p_1, p_2\}\), and \({\mathcal {C}}\) is a VC-class with VC-index 2. By Lemma 9.8 of Kosorok [5], the function class \(\{I(g(\rho (X, X^{\prime }))\le t): t \in [0, 1]\}\) is a VC-class. By Lemma 9.9 (vi) of Kosorok [5], \({\mathcal {F}}\) is a VC-class. Since the class of kernels \( {\mathcal {F}} \) is parametrized by [0, 1] and the kernels are jointly measurable in \( w_1, \cdots , w_5, t \), this class is image-admissible Suslin. Therefore, \({\mathcal {F}}\) is an image-admissible Suslin VC-class. This completes the proof. \(\square \)

Proof of Theorem 2.11

Observe that

$$\begin{aligned} n\textrm{LMDD}_{n, Y\vert X}(t)&=\frac{-1}{n-1}\sum _{i\ne j}\left[ {\widetilde{A}}_{ij}{\widetilde{B}}_{ij}{\widetilde{C}}_{ij}(t)+R_{ij}\right] \nonumber \\&= -\frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{ij}c_{ij} + \frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{i\cdot } c_{ij}+ \frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{\cdot j}c_{ij} \nonumber \\&\quad -\frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{\cdot \cdot }c_{ij}+\frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{ij}c_{ij} - \frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{i\cdot }c_{ij} \nonumber \\&\quad -\frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{\cdot j}c_{ij}+ \frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{\cdot \cdot }c_{ij} - \frac{1}{n-1}\sum _{i\ne j}R_{ij}. \end{aligned}$$
(6.5)

By straightforward computations, we have

$$\begin{aligned} \frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{i\cdot }c_{ij}&=\frac{1}{(n-1)(n-2)}\sum _{(i, j, s)}a_{ij}b_{is}c_{ij} + \frac{1}{(n-1)(n-2)}\sum _{i\ne j}a_{ij}b_{ij}c_{ij}\\&=\frac{1}{(n-1)(n-2)}\sum _{(i, j, s)}a_{ij}b_{is}c_{ij} + E[a_{12}b_{12}c_{12}] + o_p(1). \end{aligned}$$

Under \(H_0\), \(E(Y\vert X)=E(Y)\) almost surely; therefore, we have

$$\begin{aligned} E[a_{12}b_{12}c_{12}]&=E[\rho (X_1, X_2)\langle Y_1, Y_2\rangle L_{X_1, X_2}(t)]\\&=E\{E[\rho (X_1, X_2)\langle Y_1, Y_2\rangle L_{X_1, X_2}(t) \vert X_1, X_2]\}\\&=E\{\rho (X_1, X_2)L_{X_1, X_2}(t)E[\langle Y_1, Y_2\rangle \vert X_1, X_2]\}\\&=E\{\rho (X_1, X_2)L_{X_1, X_2}(t)E[\langle Y_1, Y_2\rangle ]\}\\&=E[\rho (X_1, X_2)L_{X_1, X_2}(t)]E[\langle Y_1, Y_2\rangle ]\\&=E[a_{12}c_{12}]E[b_{12}]. \end{aligned}$$

It follows that

$$\begin{aligned} \frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{i\cdot }c_{ij}=\frac{1}{(n-1)(n-2)}\sum _{(i, j, s)}a_{ij}b_{is}c_{ij} + E[a_{12}c_{12}]E[b_{12}] + o_p(1). \end{aligned}$$

Similarly, we have

$$\begin{aligned} \frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{\cdot j}c_{ij} =&\frac{1}{(n-1)(n-2)}\sum _{(i, j, l)}a_{ij}b_{lj}c_{ij} + E[a_{12}c_{12}]E[b_{12}] + o_p(1), \\ \frac{1}{n-1}\sum _{i\ne j}a_{ij}b_{\cdot \cdot }c_{ij}=&\frac{1}{(n-1)(n-2)(n-3)}\sum _{(i, j, s, l)}a_{ij}b_{sl}c_{ij} + 2E[a_{12}c_{12}]E[b_{13}] \\&+2E[a_{12}c_{12}]E[b_{23}] + o_p(1), \\ \frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{ij}c_{ij}=&\frac{1}{(n-1)(n-2)}\sum _{(i, j, l)}a_{il}b_{ij}c_{ij} + E[a_{12}c_{12}]E[b_{12}] \\&- E[a_{13}c_{12}]E[b_{12} ] + o_p(1), \\ \frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{i\cdot }c_{ij} =&\frac{1}{(n-1)(n-2)(n-3)}\sum _{(i, j, s, l)}a_{is}b_{il}c_{ij} +E[a_{12}c_{12}]E[b_{13}]\\&+E[a_{13}c_{12}]E[b_{12}]+E[a_{13}c_{12}]E[b_{13}]\\&-2E[a_{13}c_{12}]E[b_{14}] + o_p(1),\\ \frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{\cdot j}c_{ij} =&\frac{1}{(n-1)(n-2)(n-3)}\sum _{(i, j, k, s)}a_{ik}b_{sj}c_{ij} +E[a_{12}c_{12}]E[b_{32}]\\&+E[a_{13}c_{12}]E[b_{12}]+E[a_{13}c_{12}]E[b_{32}]\\&-2E[a_{13}c_{12}]E[b_{42}] + o_p(1), \\ \frac{1}{n-1}\sum _{i\ne j}a_{i\cdot }b_{\cdot \cdot }c_{ij} =&\frac{1}{(n-1)(n-2)(n-3)(n-4)}\sum _{(i, j, k, s, l)}a_{ik}b_{sl}c_{ij} \\&+ 2E[a_{13}c_{12}]E[b_{14}]\\&+2E[a_{13}c_{12}]E[b_{24}]+2E[a_{13}c_{12}]E[b_{34}]+E[a_{12}c_{12}]E[b_{45}]\\&-3E[a_{13}c_{12}]E[b_{45}] + o_p(1),\\ \frac{1}{n-1}\sum _{i\ne j}R_{ij}=&2E[a_{13}c_{12}-a_{12}c_{12}]E[b_{12}] +o_p(1). \end{aligned}$$

Since \(E[b_{12}]=E[b_{13}]=E[b_{14}]=E[b_{23}]=E[b_{24}]=E[b_{34}]=E[b_{45}]\), we have

$$\begin{aligned} \frac{-1}{n-1}\sum _{i\ne j}{\widetilde{A}}_{ij}{\widetilde{B}}_{ij}c_{ij}&=nU_n +o_p(1), \end{aligned}$$

where \(U_n(t)\) is the \(\textrm{U}\)-process defined in equation (2.9). Based on this result, we can utilize the theory of \(\textrm{U}\)-processes to study the convergence of \(n\textrm{LMDD}_{n, Y\vert X}(t)\). We will show that for every \(t\in [0, 1]\) the kernel of \(U_n(t)\) is degenerate if \(H_0\) is true.

Recall that the kernel of \(U_n(t)\) is

$$\begin{aligned} h_t(W_1, \dots , W_5)=&\left( \rho (X_1, X_2)-\rho (X_1, X_3)\right) L_{X_1, X_2}(t)\\&\cdot \left( \langle Y_1, Y_2\rangle -\langle Y_1, Y_4\rangle -\langle Y_2, Y_5\rangle +\langle Y_4, Y_5\rangle \right) . \end{aligned}$$

The corresponding symmetrized kernel is

$$\begin{aligned} {\bar{h}}_t(W_1, \dots , W_5)=\frac{1}{5!}\sum h_t(W_{\pi (1)}, \dots , W_{\pi (5)})=S_5h_t(W_1, \dots , W_5), \end{aligned}$$

in which the sum extends over all permutations \( (\pi (1), \dots , \pi (5)) \) of \( \{1, 2, 3, 4, 5\} \). Under \(H_0\), we have

$$\begin{aligned}&E\left( h_t(W_1, \dots , W_5) \vert W_1=(x, y)\right) \\&\quad =E\left[ \left( \rho (x, X_2)-\rho (x, X_3)\right) L_{x, X_2}(t)\left( \langle y, Y_2\rangle -\langle y, Y_4\rangle -\langle Y_2, Y_5\rangle +\langle Y_4, Y_5\rangle \right) \right] \\&\quad =E\left\{ \left( \rho (x, X_2)-\rho (x, X_3)\right) L_{x, X_2}(t)E\left[ \left( \langle y, Y_2\rangle -\langle y, Y_4\rangle \right. \right. \right. \\&\qquad \left. \left. \left. -\langle Y_2, Y_5\rangle +\langle Y_4, Y_5\rangle \right) \vert X_2, X_3\right] \right\} \\&\quad =E\left[ \left( \rho (x, X_2)-\rho (x, X_3)\right) L_{x, X_2}(t)\right] E\left[ \left( \langle y, Y_2\rangle -\langle y, Y_4\rangle -\langle Y_2, Y_5\rangle +\langle Y_4, Y_5\rangle \right) \right] \\&\quad =0. \end{aligned}$$

Similarly, we have \(E\left( h_t(W_1, \dots , W_5) \vert W_i=(x, y)\right) =0\) for \(i=2, \dots , 5\). This means \(E\left( {\bar{h}}_t(W_1, \dots , W_5) \vert W_1=(x, y)\right) =0\), which indicates that \(U_n(t)\) is degenerate for every \(t\in [0, 1]\). By the proof of Theorem 2.10, we know that the function class

$$\begin{aligned} {\mathcal {F}}&=\{h_t: t \in [0, 1]\}\\&= \{G(w_1, \cdots , w_5)I(g(\rho (x_1, x_2))\le t): t \in [0, 1]\} \end{aligned}$$

is an image-admissible Suslin \(\textrm{VC}\)-class and \(G(w_1, \cdots , w_5)\) is an envelope function of \({\mathcal {F}}\). Since X and Y have finite second moments, under \( H_0 \), we have

$$\begin{aligned}&E[G^2(W_1, \cdots , W_5)]\\&\quad =E[\left\{ \rho (X_1, X_2)-\rho (X_1, X_3)\right\} ^2 \vert \langle Y_1, Y_2\rangle -\langle Y_1, Y_4\rangle -\langle Y_2, Y_5\rangle +\langle Y_4, Y_5\rangle \vert ^2]\\&\quad =E[\left\{ \rho (X_1, X_2)-\rho (X_1, X_3)\right\} ^2 E[\left\{ \langle Y_1, Y_2\rangle -\langle Y_1, Y_4\rangle \right. \\&\qquad \left. -\langle Y_2, Y_5\rangle +\langle Y_4, Y_5\rangle \right\} ^2 \vert X_1, X_2, X_3]]\\&\quad =E[\left\{ \rho (X_1, X_2)-\rho (X_1, X_3)\right\} ^2]E[\left\{ \langle Y_1, Y_2\rangle -\langle Y_1, Y_4\rangle -\langle Y_2, Y_5\rangle +\langle Y_4, Y_5\rangle \right\} ^2]\\&\quad \le 10E[\rho ^2(X_1, X_2)+\rho ^2(X_1, X_3)]E[\langle Y_1, Y_2\rangle ^2+\langle Y_1, Y_4\rangle ^2+\langle Y_2, Y_5\rangle ^2+\langle Y_4, Y_5\rangle ^2]\\&\quad \le 10E[8\rho ^2(o, X)]E[4 \vert Y_1 \vert ^2 \vert Y_2 \vert ^2]\\&\quad <\infty . \end{aligned}$$

By Corollary 5.7 of Arcones and Giné [1],

$$\begin{aligned} nU_n\rightarrow _{{\mathcal {L}}}{\mathcal {K}}=\{K_{p, 2}(\pi _{2, 5}S_5h_t): t \in [0, 1]\}. \end{aligned}$$

By Slutsky's lemma, we have

$$\begin{aligned} n\textrm{LMDD}_{n, Y\vert X}(t)\rightarrow _{{\mathcal {L}}}{\mathcal {K}}, \end{aligned}$$

which completes the proof. \(\square \)

Proof of Corollary 2.13

By the continuous mapping theorem, the conclusion follows. \(\square \)

Proof of Theorem 2.14

We first show that \( \sqrt{n}\textrm{LMDD}_{n, Y\vert X}(t)=\sqrt{n}U_n(t) +o_{p}(1)\), where \( U_n(t) \) is the U-process as before. Similar to the proof of Theorem 2.11, we have

$$\begin{aligned}&\sqrt{n}\textrm{LMDD}_{n, Y\vert X}(t)\nonumber \\&\quad =\frac{-\sqrt{n}}{n(n-1)}\sum _{i\ne j}\left[ {\widetilde{A}}_{ij}{\widetilde{B}}_{ij}{\widetilde{C}}_{ij}(t)+R_{ij}\right] \nonumber \\&\quad = -\frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{ij}b_{ij}c_{ij} + \frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{ij}b_{i\cdot } c_{ij}+\frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{ij}b_{\cdot j}c_{ij} \nonumber \\&\qquad -\frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{ij}b_{\cdot \cdot }c_{ij}+\frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{ij}c_{ij} - \frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{i\cdot }c_{ij} \nonumber \\&\qquad -\frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{\cdot j}c_{ij}+ \frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{i\cdot }b_{\cdot \cdot }c_{ij} - \frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}R_{ij}. \end{aligned}$$
(6.6)

Using arguments similar to those in the proof of Theorem 2.11, the second term on the right-hand side of equation (6.6) can be expressed as

$$\begin{aligned} \frac{\sqrt{n}}{n(n-1)}\sum _{i\ne j}a_{ij}b_{i\cdot }c_{ij}&=\frac{\sqrt{n}}{n(n-1)(n-2)}\sum _{(i,j, s)}a_{ij}b_{is}c_{ij}\\&\quad + \frac{\sqrt{n}}{n(n-1)(n-2)}\sum _{i\ne j}a_{ij}b_{ij}c_{ij}\\&=\frac{\sqrt{n}}{n(n-1)(n-2)}\sum _{(i,j, s)}a_{ij}b_{is}c_{ij} + o_p(1). \end{aligned}$$

Applying similar calculations to the remaining terms, we have

$$\begin{aligned} \sqrt{n}\textrm{LMDD}_{n, Y\vert X}(t)= \sqrt{n}U_n(t) +o_{p}(1). \end{aligned}$$

Therefore, in order to prove the conclusion of Theorem 2.14, it suffices to show that

$$\begin{aligned} \sqrt{n}(U_n(t)-E[U_n(t)])\rightarrow _{{\mathcal {L}}}5G_P(P^{4}S_5h_t), t\in [0, 1]. \end{aligned}$$

For every \( t\in [0, 1] \), \(U_n(t)\) is a non-degenerate \(\textrm{U}\)-statistic (this can be obtained by techniques similar to those in the proofs of Lee et al. [7]). Thus, we have \(\sqrt{n}(U_n(t)-E[U_n(t)])\xrightarrow {d}N(0, \sigma (t))\), where \( N(0, \sigma (t)) \) is a normal distribution with mean zero and variance \( \sigma (t) \). Combining this result with the fact that the kernel function class of \( U_n(t) \) is an image-admissible Suslin VC-class, by Theorem 5.3.3 of de la Peña and Giné [16], we have

$$\begin{aligned} \sqrt{n}(U_n(t)-E[U_n(t)])\rightarrow _{{\mathcal {L}}}5G_P(P^{4}S_5h_t), \quad t\in [0, 1]. \end{aligned}$$

This completes the proof. \(\square \)

Proof of Theorem 3.2

For every fixed \(t\in T\), by Theorem 4 of Lee et al. [7], we have

$$\begin{aligned} nU_n^*(t)\xrightarrow {d^*}\sum _{i=1}^{\infty }\lambda _i(t)(N_i^2-1), \end{aligned}$$

where \( d^* \) denotes convergence in distribution given \( Z_1, Z_2, \dots \). Using a similar argument, we obtain the convergence of the finite-dimensional distributions, that is,

$$\begin{aligned} (nU_n^*(t_1), \dots ,nU_n^*(t_k)) \xrightarrow {d^*}(\sum _{i=1}^{\infty }\lambda _i(t_1)(N_i^2-1),\dots , \sum _{i=1}^{\infty }\lambda _i(t_k)(N_i^2-1)). \end{aligned}$$

Further, by the conditions that \( (T, \nu ) \) is totally bounded and that \( nU^*_n(t) \) is asymptotically uniformly \( \nu \)-equicontinuous in probability, we have, by Theorems 1.5.4 and 1.5.7 of van der Vaart and Wellner [15],

$$\begin{aligned} nU_n^*(t)\rightarrow _{{\mathcal {L}}^*}\sum _{i=1}^{\infty }\lambda _i(t)(N_i^2-1), \quad t\in T. \end{aligned}$$

This completes the proof. \(\square \)
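For intuition, the limit in Theorem 3.2 is a weighted sum of centered chi-squared variables. The eigenvalues \(\lambda _i(t)\) are unknown in practice, which is precisely why the wild bootstrap is used rather than estimating them; still, given a hypothetical truncated set of weights, the limit law can be sampled directly (Python sketch; the numerical weights below are made up purely for illustration):

```python
import numpy as np

def weighted_chisq(lambdas, size=100_000, seed=0):
    """Draw from sum_i lambdas[i] * (N_i**2 - 1) with N_i i.i.d. standard normal."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((size, len(lambdas)))
    return (z**2 - 1.0) @ np.asarray(lambdas, dtype=float)

samples = weighted_chisq([0.5, 0.3, 0.1, 0.05])   # hypothetical truncated lambda_i(t)
print(np.quantile(samples, 0.95))                 # approximate 95% critical value at this t
```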

Proof of Theorem 3.3

Recall that

$$\begin{aligned}&\Phi _{\rho }(x, x^{\prime })=\rho (x, x^{\prime })-E[\rho (x, X^{\prime })],\\&\Psi (y, y^{\prime })=\langle y, y^{\prime }\rangle - E\langle Y, y^{\prime }\rangle -E\langle y, Y^{\prime }\rangle + E\langle Y, Y^{\prime }\rangle . \end{aligned}$$

Denote \(\phi _{ij}=\Phi _{\rho }(X_i, X_j)\), \(\psi _{ij}=\Psi (Y_i, Y_j)\) and \( {\mathcal {G}}_n^*(t)=-\frac{1}{n-1}\sum _{i\ne j}^{n}\varepsilon _i\phi _{ij}\psi _{ij}c_{ij}\varepsilon _j\). The proof consists of the following two steps:

  (i) \( {\mathcal {K}}_n^*(t) = {\mathcal {G}}_n^*(t)+o^*_{p}(1) ~~a.s.; \)

  (ii) \({\mathcal {G}}^{*}_n(t)\rightarrow _{{\mathcal {L}}^*} {\mathcal {K}}(t), \ t\in [0, 1]\).

We first prove (i). It suffices to verify that

$$\begin{aligned}&\textrm{var}^*\left[ -\frac{1}{n-1}\sum _{i\ne j}^{n}\varepsilon _i\left\{ {\tilde{A}}_{ij}{\tilde{B}}_{ij} + R_{ij} -\phi _{ij}\psi _{ij}\right\} c_{ij}\varepsilon _j\right] \\&\quad =\frac{1}{(n-1)^2}\sum _{i\ne j}^{n}\left[ \left\{ {\tilde{A}}_{ij}{\tilde{B}}_{ij} + R_{ij} -\phi _{ij}\psi _{ij}\right\} c_{ij}\right] ^2\xrightarrow {a.s.} 0. \end{aligned}$$

Using the same arguments as in the proof of Theorem 5 of Lee et al. [7], the above assertion can be obtained. Since the details are almost the same, we omit them here.

Next we prove (ii). By Theorem 3.2, it suffices to show that \( {\mathcal {G}} ^*_n(t)\) is equicontinuous in probability (with respect to the Euclidean metric on [0, 1]). By the Markov inequality, it suffices to have

$$\begin{aligned} E_{\varepsilon }\left( \sup _{ \vert s-t \vert <\delta } \vert {\mathcal {G}}^*_n(s)-{\mathcal {G}}^*_n(t) \vert ^2\right) \rightarrow 0, a.s. \end{aligned}$$

as \( \delta \rightarrow 0 \), where the expectation is taken over the \(\varepsilon _i\) given \( W_1, W_2, \dots \). To this end, note that, for every \( s, t\in [0, 1] \),

$$\begin{aligned} E_{\varepsilon } \vert {\mathcal {G}}^*_n(s)-{\mathcal {G}}^*_n(t) \vert ^2&=\frac{2}{(n-1)^2}\sum _{i\ne j}\phi ^2_{ij}\psi ^2_{ij}[{\tilde{C}}_{ij}(s)-{\tilde{C}}_{ij}(t)]^2\\&\xrightarrow [ ]{a.s.} 2E\left\{ \Phi ^2(X_1, X_2)\Psi ^2(Y_1, Y_2)[L_{X_1, X_2}(s)-L_{X_1, X_2}(t)]^2\right\} , \end{aligned}$$

by the law of large numbers for \(\textrm{U}\)-statistics, provided that

$$\begin{aligned} E\left\{ \Phi ^2(X_1, X_2)\Psi ^2(Y_1, Y_2)[L_{X_1, X_2}(s)-L_{X_1, X_2}(t)]^2\right\} <\infty . \end{aligned}$$
(6.7)

Using an argument similar to that in the proof of Theorem 2.6, under the conditions of the theorem,

$$\begin{aligned} 2E\left\{ \Phi ^2(X_1, X_2)\Psi ^2(Y_1, Y_2)[L_{X_1, X_2}(s)-L_{X_1, X_2}(t)]^2\right\} \rightarrow 0, \text { as } \vert s-t \vert \rightarrow 0. \end{aligned}$$

Condition (6.7) can be verified easily by arguments similar to those in the previous proofs. An application of the maximal inequality (see Theorem 2.2.4 of van der Vaart and Wellner [15]) gives (ii).

By (i) and (ii), \({\mathcal {K}}^{*}_n(t)\rightarrow _{{\mathcal {L}}^*} {\mathcal {K}}(t), t\in [0, 1]\). The second assertion follows by the continuous mapping theorem. \(\square \)
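The proof above also suggests a direct way to implement the wild bootstrap: by step (i), the bootstrap statistic is asymptotically equivalent to the multiplier form \({\mathcal {G}}_n^*(t)=-\frac{1}{n-1}\sum _{i\ne j}\varepsilon _i\phi _{ij}\psi _{ij}c_{ij}\varepsilon _j\). The sketch below (Python) draws bootstrap replicates of this quadratic form at a fixed t. The standard normal multipliers and the one-sided p-value are common choices assumed here rather than taken from the paper, and the matrices \(\phi \), \(\psi \), \(c\) are inputs estimated elsewhere (e.g., by sample centering as in the sketch given after the abstract).

```python
import numpy as np

def multiplier_bootstrap_pvalue(phi, psi, c, stat_obs, B=500, seed=0):
    """Wild-bootstrap p-value sketch based on
    G*_n(t) = -(1/(n-1)) * sum_{i != j} eps_i * phi_ij * psi_ij * c_ij * eps_j.

    phi, psi, c : n x n matrices of (estimated) Phi_rho(X_i, X_j), Psi(Y_i, Y_j)
    and the localization indicators at the chosen t;
    stat_obs : the observed value of n * LMDD_n(t).
    """
    rng = np.random.default_rng(seed)
    n = phi.shape[0]
    m = phi * psi * c
    np.fill_diagonal(m, 0.0)                 # exclude the i = j terms
    boot = np.empty(B)
    for b in range(B):
        eps = rng.standard_normal(n)         # i.i.d. N(0, 1) multipliers
        boot[b] = -(eps @ m @ eps) / (n - 1)
    return float((boot >= stat_obs).mean())
```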


About this article


Cite this article

Lai, T., Zhang, Z. Local Influence Detection of Conditional Mean Dependence. Commun. Math. Stat. (2023). https://doi.org/10.1007/s40304-023-00365-3
