Abstract
We consider a general nonparametric regression model called the compound model. It includes, as special cases, sparse additive regression and nonparametric (or linear) regression with many covariates but possibly a small number of relevant covariates. The compound model is characterized by three main parameters: the structure parameter describing the “macroscopic” form of the compound function, the “microscopic” sparsity parameter indicating the maximal number of relevant covariates in each component, and the usual smoothness parameter corresponding to the complexity of the members of the compound. We find the non-asymptotic minimax rate of convergence of estimators in such a model as a function of these three parameters. We also show that this rate can be attained in an adaptive way.
1 Introduction
High-dimensional statistical inference has undergone tremendous development over the past ten years, motivated by applications in various fields such as bioinformatics, computer vision, and financial engineering. The most intensively investigated models in the high-dimensional context are the (generalized) linear models, for which efficient procedures are well known and the theoretical properties are well understood (cf., for instance, [3, 10, 11, 29]). More recently, there has been increasing interest in studying non-linear models in the high-dimensional setting [6, 12, 16, 21, 27] under various types of sparsity assumptions. The present paper introduces a general framework that unifies these studies and describes the theoretical limits of statistical procedures in high-dimensional non-linear problems.
In order to reduce the technicalities and focus on the main ideas, we consider the Gaussian white noise model, which is known to be asymptotically equivalent, under some natural conditions, to the regression model [5, 22], as well as to other nonparametric models [9, 13]. Thus, we assume that we observe a real-valued Gaussian process \({\varvec{Y}}=\{Y(\phi ):\phi \in L^2([0,1]^d)\}\) such that
$$\begin{aligned} \mathbf{E}_{f}[Y(\phi )]=\langle f,\phi \rangle ,\qquad \mathbf{Cov}_{f}\big (Y(\phi ),Y(\phi ^{\prime })\big )=\varepsilon ^2\langle \phi ,\phi ^{\prime }\rangle \end{aligned}$$
for all \(\phi ,\phi ^{\prime } \in L^2([0,1]^d)\), where \(f\) is an unknown function in \(L^2([0,1]^d)\), \(\mathbf{E}_{f}\) and \(\mathbf{Cov}_{f}\) denote the expectation and covariance operators, and \(\varepsilon \) is some positive number. It is well known that these two properties uniquely characterize the probability distribution of a Gaussian process, which we will further denote by \(\mathbf{P}\!_f\) (respectively, by \(\mathbf{P}\!_0\) if \(f\equiv 0\)). Alternatively, \({\varvec{Y}}\) can be considered as a trajectory of the process
$$\begin{aligned} Y({\varvec{x}})=\int _0^{x_1}\!\!\cdots \int _0^{x_d} f(t_1,\dots ,t_d)\,dt_d\cdots dt_1+\varepsilon W({\varvec{x}}),\qquad {\varvec{x}}=(x_1,\dots ,x_d)\in [0,1]^d, \end{aligned}$$
where \(W({\varvec{x}})\) is a \(d\)-parameter Brownian sheet. The parameter \(\varepsilon \) is assumed known; in the regression model it corresponds to the quantity \(\sigma n^{-1/2}\), where \(\sigma ^2\) is the variance of the noise. Without loss of generality, we assume in what follows that \(0<\varepsilon <1\).
1.1 Notation
First, we introduce some notation. Vectors in finite-dimensional spaces and infinite sequences will be denoted by boldface letters, vector norms will be denoted by \(|\cdot |\) while function norms will be denoted by \(\Vert \cdot \Vert \). Thus, for \(\mathbf{v}=(v_1,\dots ,v_d)\in \mathbb{R }^d\) we set
$$\begin{aligned} |\mathbf{v}|_p=\Big (\sum _{j=1}^d |v_j|^p\Big )^{1/p}\ \text{ for }1\le p<\infty ,\qquad |\mathbf{v}|_\infty =\max _{1\le j\le d}|v_j|,\qquad |\mathbf{v}|_0=\big |\{j:v_j\ne 0\}\big |, \end{aligned}$$
whereas for a function \(f:[0,1]^d\rightarrow \mathbb{R }\) we set
$$\begin{aligned} \Vert f\Vert _p=\Big (\int _{[0,1]^d} |f({\varvec{x}})|^p\,d{\varvec{x}}\Big )^{1/p},\qquad 1\le p<\infty . \end{aligned}$$
We denote by \(L^2_0([0,1]^d)\) the subspace of \(L^2([0,1]^d)\) containing all the functions \(f\) such that \(\int _{[0,1]^d} f({\varvec{x}})\,d{\varvec{x}}=0\). The notation \(\langle \cdot ,\cdot \rangle \) will be used for the inner product in \(L^2([0,1]^d)\), that is \(\langle h,\tilde{h}\rangle =\int _{[0,1]^d} h({\varvec{x}})\tilde{h}({\varvec{x}})\,d{\varvec{x}}\) for any \(h,\tilde{h}\in L^2([0,1]^d)\). For two integers \(a\) and \(a^{\prime }\), we denote by \({[\![}a,a^{\prime }{]\!]}\) the set of all integers belonging to the interval \([a,a^{\prime }]\). We denote by \([t]\) the integer part of a real number \(t\). For a finite set \(V\), we denote by \(|V|\) its cardinality. For a vector \({\varvec{x}}\in \mathbb{R }^d\) and a set of indices \(V\subseteq \{ 1,\dots ,d\}\), the vector \({\varvec{x}}_V\in \mathbb{R }^{|V|}\) is defined as the restriction of \({\varvec{x}}\) to the coordinates with indices belonging to \(V\). For every \(s\in \{ 1,\dots , d\}\) and \(m\in \mathbb{N }\), we define \(\mathcal{V }_s^d=\big \{V\subseteq \{ 1,\dots ,d\}:|V|\le s\big \}\) and the set of binary vectors \(\mathcal{B }_{s,m}^d=\big \{{\varvec{\eta }}\in \{0,1\}^{\mathcal{V }_s^d}:|{\varvec{\eta }}|_0=m\big \}\). We also use the notation \(M_{d,s}\triangleq |\mathcal{V }_s^d|\). We extend these definitions to \(s=0\) by setting \(\mathcal{V }_0^d=\{\emptyset \}, M_{d,0}=1, |\mathcal{B }_{0,1}^d|=1\), and \(|\mathcal{B }_{0,m}^d|=0\) for \(m>1\). For a vector \(\varvec{a}\), we denote by \(\mathop {\text{ supp}}(\varvec{a})\) the set of indices of its non-zero coordinates. In particular, the support \(\mathop {\text{ supp}}({\varvec{\eta }})\) of a binary vector \({\varvec{\eta }}=\{\eta _V\}_{V\in \mathcal{V }_s^d} \in \mathcal{B }_{s,m}^d\) is the set of \(V\)’s such that \(\eta _V=1\).
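As a quick sanity check on this combinatorics, the sets \(\mathcal{V }_s^d\) and the cardinality \(M_{d,s}\) can be enumerated directly. The sketch below is ours (the names `V_sd` and `M_ds` are not from the paper); it counts the empty set, as the definitions above imply, so that \(M_{d,s}=\sum _{k=0}^{s}\binom{d}{k}\).

```python
from itertools import combinations
from math import comb

def V_sd(d, s):
    """The set V_s^d: all subsets V of {1, ..., d} with |V| <= s.

    The empty set is included (cf. the extension V_0^d = {emptyset} above).
    """
    return [frozenset(c)
            for k in range(s + 1)
            for c in combinations(range(1, d + 1), k)]

def M_ds(d, s):
    """Cardinality M_{d,s} = |V_s^d| = sum_{k=0}^{s} C(d, k)."""
    return sum(comb(d, k) for k in range(s + 1))

print(M_ds(4, 2))  # → 11, i.e. 1 + 4 + 6
```

For instance, with \(d=4\) and \(s=2\) there are \(1+4+6=11\) admissible sets \(V\).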
1.2 Compound functional model
In this paper we impose the following assumption on the unknown function \(f\).
Compound functional model There exists an integer \(s\in \{ 1,\dots ,d\}\), a binary sequence \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\), a set of functions \(\{f_V\in L^2_0([0,1]^{|V|})\}_{V\in \mathcal{V }_s^d}\) and a constant \(\bar{f}\) such that
$$\begin{aligned} f({\varvec{x}})=\bar{f}+\sum _{V\in \mathop {\text{ supp}}({\varvec{\eta }})} f_V({\varvec{x}}_V),\qquad {\varvec{x}}\in [0,1]^d. \end{aligned}$$
The functions \(f_V\) are called the atoms of the compound model.
Note that, under the compound model, \(\bar{f}=\int _{[0,1]^d} f({\varvec{x}})\,d{\varvec{x}}\).
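The identity \(\bar{f}=\int _{[0,1]^d} f({\varvec{x}})\,d{\varvec{x}}\) holds because every atom belongs to \(L^2_0\) and hence integrates to zero. A minimal numerical sketch, with hypothetical atoms of our choosing (not from the paper), illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical atoms: each f_V depends only on the coordinates in V and has
# zero mean over the unit cube, as required by f_V in L^2_0.
def f_V1(x):          # V1 = {1}
    return np.cos(2 * np.pi * x[..., 0])

def f_V2(x):          # V2 = {2, 3}
    return np.sin(2 * np.pi * x[..., 1]) * np.cos(2 * np.pi * x[..., 2])

f_bar = 1.7           # the constant offset

def f(x):
    return f_bar + f_V1(x) + f_V2(x)

# Monte Carlo check: the integral of f over [0,1]^3 recovers f_bar.
x = rng.random((200000, 3))
print(abs(f(x).mean() - f_bar) < 0.02)  # → True
```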
The atoms \(f_V\) are assumed to be sufficiently regular, namely, each \(f_V\) is an element of a suitable functional class \(\Sigma _V\). In particular, one can take for \(\Sigma _V\) a smoothness class and, more specifically, a Sobolev ball of functions of \(s\) variables. In what follows, we will mainly deal with this example.
Given a collection \({\varvec{\Sigma }}=\{\Sigma _V\}_{V\in \mathcal{V }_s^d}\) of subsets of \(L^2_0([0,1]^s)\) and a subset \(\tilde{\mathcal{B }}\) of \(\mathcal{B }_{s,m}^d\), we define the classes
where
The class \(\mathcal{F }_{s,m}({\varvec{\Sigma }})\) is defined for any \(s\in \{ 0,\dots ,d\}\) and any \(m \in \{0,\dots , M_{d,s}\}\). In what follows, we assume that \(\tilde{\mathcal{B }}\) is fixed and for this reason we do not include it in the notation. Examples of \(\tilde{\mathcal{B }}\) include the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that the sets \(V\in \mathop {\text{ supp}}({\varvec{\eta }})\) are pairwise disjoint, or the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that every set \(V\) from \(\mathop {\text{ supp}}({\varvec{\eta }})\) has a non-empty intersection with at most one other set from \(\mathop {\text{ supp}}({\varvec{\eta }})\).
It is clear from the definition that the parameters \(\big ({\varvec{\eta }},\{f_V\}_{V\in \mathop {\text{ supp}}({\varvec{\eta }})}\big )\) are not identifiable. Indeed, two different collections \(\big ({\varvec{\eta }},\{f_V\}_{V\in \mathop {\text{ supp}}({\varvec{\eta }})}\big )\) and \(\big (\bar{\varvec{\eta }},\{\bar{f}_V\}_{V\in \mathop {\text{ supp}}(\bar{\varvec{\eta }})}\big )\) may lead to the same compound function \(f\). Of course, this is not necessarily an issue as long as only the problem of estimating \(f\) is considered.
We now define the Sobolev classes of functions of many variables that will play the role of \(\Sigma _V\). Consider an orthonormal system of functions \(\{\varphi _{\varvec{j}}\}_{{\varvec{j}}\in \mathbb{Z }^d}\) in \( L^2([0,1]^d)\) such that \(\varphi _{\varvec{0}}({\varvec{x}})\equiv 1\). We assume that the system \(\{\varphi _{\varvec{j}}\}\) and the set \(\tilde{\mathcal{B }}\) are such that
for all \({\varvec{\eta }}\in \tilde{\mathcal{B }}\) and all square-summable arrays \((\theta _{{\varvec{j}},V}, \,({\varvec{j}},V)\in \mathbb{Z }^d\times \mathcal{V }_s^d)\), where \(C_*>0\) is a constant independent of \(s,m\) and \(d\). For example, this condition holds with \(C_*=1\) if \(\tilde{\mathcal{B }}\) is the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that the sets \(V\in \mathop {\text{ supp}}({\varvec{\eta }})\) are pairwise disjoint, and with \(C_*=2\) if \(\tilde{\mathcal{B }}\) is the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that every set \(V\) from \(\mathop {\text{ supp}}({\varvec{\eta }})\) has a non-empty intersection with at most one other set from \(\mathop {\text{ supp}}({\varvec{\eta }})\).
One example of \(\{\varphi _{\varvec{j}}\}_{{\varvec{j}}\in \mathbb{Z }^d}\) is a tensor product orthonormal basis:
$$\begin{aligned} \varphi _{\varvec{j}}({\varvec{x}})=\prod _{l=1}^d \varphi _{j_l}(x_l),\qquad {\varvec{x}}=(x_1,\dots ,x_d)\in [0,1]^d, \end{aligned}$$(3)
where \({\varvec{j}}=(j_1,\dots ,j_d)\in \mathbb{Z }^d\) is a multi-index and \(\{\varphi _{k}\}, \,k\in \mathbb{Z }\), is an orthonormal basis in \( L^2([0,1])\). Specifically, we can take the trigonometric basis with \(\varphi _{0}(u)\equiv 1\) on \([0,1], \varphi _{k}(u)=\sqrt{2}\cos (2\pi \,k u)\) for \(k>0\) and \(\varphi _{k}(u)=\sqrt{2}\sin (2\pi \,k u)\) for \(k<0\). To ease notation, we set \(\theta _{\varvec{j}}[{f}]=\langle {f},\varphi _{\varvec{j}}\rangle \) for \({\varvec{j}}\in \mathbb{Z }^d\).
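The orthonormality of the tensor-product trigonometric basis can be verified numerically. The helper names below (`phi`, `phi_j`) are ours; a midpoint-grid quadrature is exact for these low frequencies, so the inner products come out as 0 or 1 up to rounding.

```python
import numpy as np

def phi(k, u):
    """1-D trigonometric basis on [0,1], following the convention above."""
    if k == 0:
        return np.ones_like(u)
    if k > 0:
        return np.sqrt(2) * np.cos(2 * np.pi * k * u)
    return np.sqrt(2) * np.sin(2 * np.pi * k * u)   # k < 0

def phi_j(j, x):
    """Tensor product phi_j(x) = prod_l phi_{j_l}(x_l) for a multi-index j."""
    out = np.ones(x.shape[0])
    for l, k in enumerate(j):
        out *= phi(k, x[:, l])
    return out

# Check orthonormality numerically on a fine midpoint grid in d = 2.
n = 400
g = (np.arange(n) + 0.5) / n
X = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1).reshape(-1, 2)
w = 1.0 / len(X)                              # midpoint quadrature weight
ip = lambda j, jp: np.sum(phi_j(j, X) * phi_j(jp, X)) * w

print(abs(ip((1, -2), (1, -2)) - 1) < 1e-8,   # → True (unit norm)
      abs(ip((1, -2), (2, 0))) < 1e-8)        # → True (orthogonality)
```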
For any set of indices \(V\subseteq \{1,\dots ,d\}\) and any \(\beta >0, L>0\), we define the Sobolev class of functions
Assuming that \(\{\varphi _{\varvec{j}}\}\) is the trigonometric basis and \(f\) is periodic with period one in each coordinate, i.e., \(f({\varvec{x}}+{\varvec{j}})=f({\varvec{x}})\) for every \({\varvec{x}}\in \mathbb{R }^d\) and every \({\varvec{j}}\in \mathbb{Z }^d\), the condition \(f_V\in W_{V}(\beta ,L)\) can be interpreted as the square integrability of all partial derivatives of \(f_V\) up to the order \(\beta \).
Let us give some examples of compound models.
-
Additive models are the special case \(s=1\) of compound models. Here, additive models are understood in a wider sense than originally defined by [26]. Namely, for \(s=1\) we have the model
$$\begin{aligned} f({\varvec{x}})=\bar{f}+\sum _{j\in J} f_j(x_j), \qquad {\varvec{x}}=(x_1,\dots ,x_d)\in \mathbb{R }^d, \end{aligned}$$where \(J\) is any (unknown) subset of indices and not necessarily \(J=\{1,\dots , d\}\). Estimation and testing problems in this model when the atoms belong to some smoothness classes have been studied in [12, 14, 16, 19, 21, 27].
-
Single atom models are the special case \(m=1\) of compound models. If \(m=1\) we have \(f({\varvec{x}})=f_V({\varvec{x}}_V)\) for some unknown \(V\subseteq \{ 1,\dots ,d\}\), i.e., there exists only one set \(V\) for which \(\eta _V=1\), and \(|V|\le s\). Estimation and variable selection in this model were considered by [2, 7, 25]. The case of small \(s\) and large \(d\) is particularly interesting in the context of sparsity. In a parametric model, when \(f_V\) is a linear function, we are back to the sparse high-dimensional linear regression setting, which has been extensively studied, see, e.g., [29].
-
Tensor product models Let \(\mathcal{A }\) be a given finite subset of \(\mathbb{Z }\), and assume that \(\varphi _{\varvec{j}}\) is a tensor product basis defined by (3). Consider the following parametric class of functions
$$\begin{aligned} {\varvec{T}}_{{\varvec{\eta }}}(\mathcal{A })&= \left\{ f:\mathbb{R }^d\rightarrow \mathbb{R }: \exists \,\bar{f}, \{\theta _{{\varvec{j}},V}\} \text{ such that } f\right.\nonumber \\&= \left.\bar{f}+\sum _{V\in \mathop {\text{ supp}}({\varvec{\eta }})}\sum _{{\varvec{j}}\in \mathcal{J }_{V,\mathcal{A }}} \theta _{{\varvec{j}},V}\varphi _{\varvec{j}}\right\} , \end{aligned}$$(5)where
$$\begin{aligned} \mathcal{J }_{V,\mathcal{A }}=\Big \{ {\varvec{j}}\in \mathcal{A }^d : \mathop {\text{ supp}}({\varvec{j}}) \subseteq V \Big \}. \end{aligned}$$(6)We say that a function \(f\) satisfies the tensor product model if it belongs to the set \({\varvec{T}}_{\varvec{\eta }}(\mathcal{A })\) for some \({\varvec{\eta }}\in \tilde{\mathcal{B }}\). We define
$$\begin{aligned} \mathcal{F }_{s,m}({\varvec{T}}_{\!\!\mathcal{A }})=\bigcup \limits _{{\varvec{\eta }}\in \tilde{\mathcal{B }}} {\varvec{T}}_{\varvec{\eta }}(\mathcal{A }). \end{aligned}$$Important examples are sparse high-dimensional multilinear/polynomial systems. Motivated respectively by applications in genetics and signal processing, they have been recently studied by [20] in the context of compressed sensing without noise and by [15] in the case where the observations are corrupted by a Gaussian noise. With our notation, the models they considered are the tensor product models with \(\mathcal{A }=\{0,1\}\) (linear basis functions \(\varphi _j\)) in the multilinear model of [20] and \(\mathcal{A }=\{-1,0,1\}\) in the Volterra filtering problem of [15] (second-order Volterra systems with \(\varphi _0(x)\equiv 1, \varphi _1(x)\propto (x-1/2)\) and \(\varphi _{-1}(x)\propto x^2-x+1/6\)). More generally, the set \(\mathcal{A }\) should be of small cardinality to guarantee efficient dimension reduction. Another approach is to introduce hierarchical structures on the coefficients of tensor product representation [1, 4].
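As an illustration of the index sets (6), one can enumerate \(\mathcal{J }_{V,\mathcal{A }}\) by brute force. The function name `J_VA` is ours; the sketch assumes \(0\in \mathcal{A }\), so that \(|\mathcal{J }_{V,\mathcal{A }}|=|\mathcal{A }|^{|V|}\).

```python
from itertools import product

def J_VA(V, A, d):
    """J_{V,A} = { j in A^d : supp(j) subseteq V }, cf. (6)."""
    return [j for j in product(A, repeat=d)
            if all(j[i] == 0 for i in range(d) if (i + 1) not in V)]

# Multilinear case A = {0,1} of [20]: with V = {1,3} and d = 4, the four
# multi-indices correspond to the monomials 1, x1, x3, x1*x3.
J = J_VA({1, 3}, (0, 1), 4)
print(len(J))  # → 4, i.e. |A|^{|V|}
```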
In what follows, we assume that \(f\) belongs to the functional class \(\mathcal{F }_{s,m}({\varvec{\Sigma }})\) where either \({\varvec{\Sigma }}=\{W_{V}(\beta ,L)\}_{V\in \mathcal{V }_s^d}\triangleq \varvec{W}(\beta ,L)\) or \({\varvec{\Sigma }}={\varvec{T}}_{\!\!\mathcal{A }}\).
The compound model is described by three main parameters: the dimension \(m\), which we call the macroscopic parameter and which characterizes the complexity of the possible structure vectors \({\varvec{\eta }}\); the dimension \(s\) of the atoms in the compound, which we call the microscopic parameter; and the complexity of the functional class \({\varvec{\Sigma }}\). The latter can be described by entropy numbers of \({\varvec{\Sigma }}\) in convenient norms; in the particular case of Sobolev classes, it is naturally characterized by the smoothness parameter \(\beta \). The integers \(m\) and \(s\) are “effective dimension” parameters. As they grow, the structure becomes less pronounced and the compound model approaches global nonparametric regression in dimension \(d\), which is known to suffer from the curse of dimensionality already for moderate \(d\). Therefore, the interesting case is the sparsity scenario, where \(s\) and/or \(m\) are small.
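The curse of dimensionality alluded to here is already visible in the one-atom nonparametric rate \(\varepsilon ^{4\beta /(2\beta +s)}\). A short computation (constants ignored; the parameter values are illustrative only) shows how the rate deteriorates as the microscopic dimension \(s\) grows:

```python
# Nonparametric rate eps^{4*beta/(2*beta+s)} for a single atom of s variables,
# up to constants: as s grows, the exponent 4*beta/(2*beta+s) shrinks toward 0,
# so the rate degrades toward eps^0 = 1 -- the curse of dimensionality.
eps, beta = 1e-3, 2.0
for s in (1, 2, 5, 20):
    print(s, eps ** (4 * beta / (2 * beta + s)))
```

With \(\varepsilon =10^{-3}\) and \(\beta =2\), the rate worsens by several orders of magnitude between \(s=1\) and \(s=20\), which is why small \(s\) is the interesting regime.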
2 Overview of the results and relation to the previous work
Several statistical problems arise naturally in the context of the compound functional model.
-
Estimation of \(f\) This is the subject of the present paper. We measure the risk of an arbitrary estimator \(\widetilde{f}_\varepsilon \) by its mean integrated squared error \(\mathbf{E}_f[\Vert \widetilde{f}_\varepsilon -f\Vert _2^2]\) and we study the minimax risk
$$\begin{aligned} \inf _{\widetilde{f}_\varepsilon }\sup _{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }})} \mathbf{E}_f[\Vert \widetilde{f}_\varepsilon -f\Vert _2^2], \end{aligned}$$where \(\inf _{\widetilde{f}_\varepsilon }\) denotes the infimum over all estimators. A first general question is to establish the minimax rates of estimation, i.e., to find values \(\psi _{s,m,\varepsilon }({\varvec{\Sigma }})\) such that
$$\begin{aligned} \inf _{\widetilde{f}_\varepsilon }\sup _{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }})} \mathbf{E}_f[\Vert \widetilde{f}_\varepsilon -f\Vert _2^2] \asymp \psi _{s,m,\varepsilon }({\varvec{\Sigma }}), \end{aligned}$$when \({\varvec{\Sigma }}\) is a Sobolev, Hölder or other class of functions. A second question is to construct optimal estimators in a minimax sense, i.e., estimators \(\widehat{f}_\varepsilon \) such that
$$\begin{aligned} \sup _{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }})} \mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2] \le C \psi _{s,m,\varepsilon }({\varvec{\Sigma }}), \end{aligned}$$(7)for some constant \(C\) independent of \(s,m,\varepsilon \) and \({\varvec{\Sigma }}\). Some results on minimax rates of estimation of \(f\) are available only for the case \(s=1\) (cf. the discussion below). Finally, a third question that we address here is whether the optimal rate can be attained adaptively, i.e., whether one can construct an estimator \(\widehat{f}_\varepsilon \) that satisfies (7) simultaneously for all \(s,m,\beta \) and \(L\) when \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\). We will show that the answer to this question is positive.
-
Variable selection Assume that \(m=1\). This means that \(f({\varvec{x}})=f_V({\varvec{x}}_V)\) for some unknown \(V\subseteq \{ 1,\dots ,d\}\), i.e., there exists only one set \(V\) for which \(\eta _V=1\) (a single atom model). Then it is of interest to identify \(V\) under the constraint \(|V|\le s\). In particular, \(d\) can be very large while \(s\) can be small. This corresponds to estimating the relevant covariates and generalizes the problem of recovering the sparsity pattern in linear regression. An estimator \(\widehat{V}_n\subseteq \{ 1,\dots ,d\}\) of \(V\) is considered good if the probability \(\mathbf{P}(\widehat{V}_n=V)\) is close to one.
-
Hypotheses testing (detection) The problem is to test the hypothesis \(H_0: f\equiv 0\) (no signal) against the alternative \(H_1 : f\in \mathcal{A }\), where \(\mathcal{A }=\big \{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }}): \Vert f\Vert _2\ge r\big \}\). Here, it is interesting to characterize the minimax rates of separation \(r>0\) in terms of \(s, m\) and \({\varvec{\Sigma }}\).
Some of the above three problems have been studied in the literature in the special cases \(s=1\) (additive model) and \(m=1\) (single atom model). Ingster and Lepski [14] studied the problem of testing in the additive model and provided asymptotic minimax rates of separation. Sharp asymptotic optimality under additional assumptions in the same problem was obtained by Gayraud and Ingster [12]. Recently, Comminges and Dalalyan [7] established tight conditions for variable selection in the single atom model. We also mention an earlier work of Bertin and Lecué [2] dealing with variable selection.
The problem of estimation has also been considered for the additive model with the class \({\varvec{\Sigma }}\) defined as a reproducing kernel Hilbert space, cf. Koltchinskii and Yuan [16], Raskutti et al. [21]. In particular, these papers showed that if \(s=1\) and \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\) is a Sobolev class, then there is an estimator of \(f\) whose mean integrated squared error converges to zero at the rate
Furthermore, Raskutti et al. [21, Thm. 2] provided the following lower bound on the minimax risk:
Note that when \(m\) is proportional to \(d\), this lower bound differs from the upper bound by a logarithmic factor. It should also be noted that the upper bounds in these papers are achieved by estimators that are not adaptive, in the sense that they require the knowledge of the smoothness index \(\beta \).
In this paper, we establish non-asymptotic upper and lower bounds on the minimax risk for the model with Sobolev smoothness class \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\). We will prove that, up to a multiplicative constant, the minimax risk behaves as
(we assume here \(d/(sm^{1/s})>1\); otherwise a constant factor \(>\)1 should be inserted under the logarithm, cf. the results below). In addition, we demonstrate that this rate can be attained adaptively, that is, without the knowledge of \(\beta , s\), and \(m\). The rate (10) is non-asymptotic, which explains, in particular, the presence of the minimum with the constant \(L\) in (10). For \(s=1\), i.e., for the additive regression model, our rate matches the lower bound of Raskutti et al. [21].
For \(m=1\), i.e., when \(f({\varvec{x}})=f_V({\varvec{x}}_V)\) for some unknown \(V\subseteq \{ 1,\dots ,d\}\) (the single atom model), the minimax rate of convergence takes the form
$$\begin{aligned} \psi _{s,1,\varepsilon }\asymp \varepsilon ^{4\beta /(2\beta +s)} + s\varepsilon ^2\log ({d}/{s}). \end{aligned}$$(11)
This rate accounts for two effects: the accuracy of nonparametric estimation of \(f\) for a fixed macroscopic structure parameter \({\varvec{\eta }}\), cf. the first term \(\sim \varepsilon ^{4\beta /(2\beta +s)}\), and the complexity of the structure itself (irrespective of the nonparametric nature of the microscopic components \(f_V({\varvec{x}}_V)\)). In particular, the second term \(\sim s\varepsilon ^2\log ({d}/{s})\) in (11) coincides with the optimal rate of prediction in the linear regression model under the standard sparsity assumption; this is what we obtain in the limiting case when \(\beta \) tends to infinity. It is important to note that the optimal rates depend only logarithmically on the ambient dimension \(d\). Thus, even if \(d\) is large, the rate optimal estimators achieve good performance under the sparsity scenario, when \(s\) and \(m\) are small.
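To see which of the two effects dominates, one can compare the two terms numerically (constants ignored; the parameter values below are illustrative only, chosen by us):

```python
import math

# The two components of the m = 1 rate discussed above, up to constants:
# a nonparametric term eps^{4*beta/(2*beta+s)} and a structural term
# s * eps^2 * log(d/s).
def nonparam(eps, beta, s):
    return eps ** (4 * beta / (2 * beta + s))

def structural(eps, s, d):
    return s * eps ** 2 * math.log(d / s)

eps, beta, s = 1e-2, 1.5, 2
for d in (10, 10**6, 10**12):
    print(d, nonparam(eps, beta, s) > structural(eps, s, d))
```

For moderate \(d\) the nonparametric term dominates; only for astronomically large \(d\) does the logarithmic structural term take over, reflecting the mild dependence on the ambient dimension.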
3 The estimator and upper bounds on the minimax risk
In this section, we suggest an estimator attaining the minimax rate. It is constructed in the following two steps.
-
Constructing weak estimators At this step, we proceed as if the macroscopic structure parameter \({\varvec{\eta }}\) was known and denote by \(V_1,\ldots ,V_m\) the elements of the support of \({\varvec{\eta }}\). The goal is to provide for each \({\varvec{\eta }}\) a family of “simple” estimators of \(f\)—indexed by some parameter \({\varvec{t}}\)—containing a rate-minimax one. To this end, we first project \({\varvec{Y}}\) onto the basis functions \(\{\varphi _{\varvec{j}}:{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}\}\) and denote
$$\begin{aligned} {\varvec{Y}}_\varepsilon =(Y_{\varvec{j}}\triangleq Y(\varphi _{\varvec{j}}): {\varvec{j}}\in \mathbb{Z }^d, \, {|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}). \end{aligned}$$(12)Then, we consider a collection \(\{\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}:\ {\varvec{t}}\in \mathbb{Z }^m\cap [1,\varepsilon ^{-2}]^m\}\) of projection estimators of the vector \({\varvec{\theta }}_\varepsilon =(\theta _{\varvec{j}}[f])_{{\varvec{j}}\in \mathbb{Z }^d: |{\varvec{j}}|_\infty \le \varepsilon ^{-2}}\). The role of each component \(t_\ell \) of \({\varvec{t}}\) is to indicate the cut-off level of the coefficients \(\theta _{{\varvec{j}}}\) corresponding to the atom \(f_{V_\ell }\), that is the level of indices beyond which the coefficients are estimated by \(0\). To be more precise, for an integer-valued vector \({\varvec{t}}=(t_{V_\ell }, \ell =1,\dots , m)\in [0, \varepsilon ^{-2}]^m\) we set \(\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}=(\widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},{\varvec{j}}}:{\varvec{j}}\in \mathbb{Z }^d, \,{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2})\), where \(\widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},\varvec{0}}=Y_{\varvec{0}}\) and
$$\begin{aligned} \widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},{\varvec{j}}}= {\left\{ \begin{array}{ll} Y_{\varvec{j}},&\exists \ell \ \text{ s.t. }\ \mathop {\text{ supp}}({\varvec{j}})\subseteq V_\ell \,,\ {|{{\varvec{j}}}|}_\infty \in [1, t_{V_\ell }], \\ 0,&\text{ otherwise} \end{array}\right.} \end{aligned}$$if \({\varvec{j}}\ne \varvec{0}\). Based on these estimators of the coefficients of \(f\), we recover the function \(f\) using the estimator
$$\begin{aligned} \widehat{f}_{{\varvec{t}},{\varvec{\eta }}}({\varvec{x}})=\sum \limits _{{\varvec{j}}\in \mathbb{Z }^d:{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}} \widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},{\varvec{j}}} \varphi _{\varvec{j}}({\varvec{x}}). \end{aligned}$$ -
Smoothness- and structure-adaptive estimation The goal in this step is to combine the weak estimators \(\{\widehat{f}_{{\varvec{t}},{\varvec{\eta }}}\}_{{\varvec{t}},{\varvec{\eta }}}\) in order to get a structure and smoothness adaptive estimator of \(f\) with a risk which is as small as possible. To this end, we use a version of exponentially weighted aggregate [10, 11, 18] in the spirit of sparsity pattern aggregation as described in [23, 24]. More precisely, for every pair of integers \((s,m)\) such that \(s\in \{1,\ldots ,d\}\) and \(m\in \{1,\ldots ,M_{d,s}\}\), we define prior probabilities for \(({\varvec{t}},{\varvec{\eta }})\in {[\![}0,\varepsilon ^{-2}{]\!]}^m\times (\mathcal{B }_{s,m}^d\setminus \mathcal{B }_{s-1,m}^d)\) by
$$\begin{aligned} \pi _{{\varvec{t}},{\varvec{\eta }}}=\frac{2^{-sm}}{H_d(1+[\varepsilon ^{-2}])^m|\mathcal{B }_{s,m}^d\setminus \mathcal{B }_{s-1,m}^d|},\qquad H_d=\sum _{s=0}^d\sum _{m=1}^{M_{d,s}} 2^{-sm}\le e. \end{aligned}$$(13)For \(s=0\) and the unique \({\varvec{\eta }}_0\in \mathcal{B }_{0,1}^d\) we consider only one weak estimator \(\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}_0}\) with all entries zero except for the entry \(\widehat{\theta }_{{\varvec{t}},{\varvec{\eta }}_0,\varvec{0}}\), which is equal to \(Y_{\varvec{0}}\). We set \(\pi _{{\varvec{t}},{\varvec{\eta }}_0}=1/H_d\). It is easy to see that \({\varvec{\pi }}=\Big (\pi _{{\varvec{t}},{\varvec{\eta }}};({\varvec{t}},{\varvec{\eta }})\in \bigcup _{s,m} \{{[\![}0,\varepsilon ^{-2}{]\!]}^m\times \mathcal{B }_{s,m}^d\}\Big )\) defines a probability distribution. For any pair \(({\varvec{t}},{\varvec{\eta }})\) we introduce the penalty function
$$\begin{aligned} \text{ pen}({\varvec{t}},{\varvec{\eta }})=2\varepsilon ^2\prod _{V\in \mathop {\text{ supp}}({\varvec{\eta }})} (2t_V+1)^{|V|} \end{aligned}$$and define the vector of coefficients \(\widehat{\varvec{\theta }}_\varepsilon =(\widehat{\theta }_{\varepsilon ,{\varvec{j}}}:{\varvec{j}}\in \mathbb{Z }^d, \, {|{\varvec{j}}|}_\infty \le \varepsilon ^{-2})\) by
$$\begin{aligned} \widehat{\varvec{\theta }}_\varepsilon =\sum _{s=1}^d\sum _{m=1}^{M_{d,s}}\sum _{({\varvec{t}},{\varvec{\eta }})} \widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}} \frac{\exp \big \{-\frac{1}{4\varepsilon ^2}\big (|{\varvec{Y}}_\varepsilon -\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}|_2^2+\text{ pen}({\varvec{t}},{\varvec{\eta }})\big )\big \}\pi _{{\varvec{t}},{\varvec{\eta }}}}{\sum _{\bar{s}=1}^d\sum _{\bar{m}=1}^{M_{d,\bar{s}}}\sum _{(\bar{\varvec{t}},\bar{\varvec{\eta }})}\exp \big \{-\frac{1}{4\varepsilon ^2} \big (|{\varvec{Y}}_\varepsilon -\widehat{\varvec{\theta }}_{\bar{\varvec{t}},\bar{\varvec{\eta }}}|_2^2+\text{ pen}(\bar{\varvec{t}},\bar{\varvec{\eta }})\big )\big \}\pi _{\bar{\varvec{t}},\bar{\varvec{\eta }}}}, \end{aligned}$$(14)where the summations \(\sum _{({\varvec{t}},{\varvec{\eta }})}\) and \(\sum _{(\bar{\varvec{t}},\bar{\varvec{\eta }})}\) correspond to \(({\varvec{t}},{\varvec{\eta }}) \in {[\![}0,\varepsilon ^{-2}{]\!]}^m\times (\mathcal{B }^d_{s,m}\setminus \mathcal{B }^d_{s-1,m})\) and \((\bar{\varvec{t}},\bar{\varvec{\eta }})\in {[\![}0,\varepsilon ^{-2}{]\!]}^{\bar{m}}\times (\mathcal{B }^d_{\bar{s},\bar{m}}\setminus \mathcal{B }^d_{\bar{s}-1,\bar{m}})\), respectively. The final estimator of \(f\) is
$$\begin{aligned} \widehat{f}_\varepsilon ({\varvec{x}})=\sum \limits _{{\varvec{j}}\in \mathbb{Z }^d:{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}} \widehat{\theta }_{\varepsilon ,{\varvec{j}}}\varphi _{\varvec{j}}({\varvec{x}}), \qquad \forall {\varvec{x}}\in [0,1]^d. \end{aligned}$$
Note that each \(\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}\) is a projection estimator of the vector \({\varvec{\theta }}=(\theta _{\varvec{j}}[f])_{{\varvec{j}}\in \mathbb{Z }^d}\). Hence, \(\widehat{f}_\varepsilon \) is a convex combination of projection estimators. We also note that, to construct \(\widehat{f}_\varepsilon \), we only need to know \(\varepsilon \) and \(d\). Therefore, the estimator is adaptive to all other parameters of the model, such as \(s, m\), the parameters that define the class \({\varvec{\Sigma }}\) and the choice of a particular subset \(\tilde{\mathcal{B }}\) of \(\mathcal{B }^d_{s,m}\).
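The two-step construction can be sketched in a toy one-dimensional sequence model (trivial structure, \(m=1\), \(|V|=1\)). All numerical choices below are ours; the penalty used is the \(m=1\), \(|V|=1\) case of \(\text{ pen}({\varvec{t}},{\varvec{\eta }})\) above, with a uniform prior over cut-offs instead of the prior (13).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D Gaussian sequence model y_j = theta_j + eps * xi_j, j = -N..N,
# standing in for the array Y_eps of (12).
eps, N = 0.05, 50
j = np.arange(-N, N + 1)
theta = np.where(np.abs(j) <= 5, 1.0 / (1 + np.abs(j)), 0.0)  # smooth signal
y = theta + eps * rng.standard_normal(theta.shape)

# Weak (projection) estimators: keep y_j for |j| <= t, set the rest to 0.
def proj(t):
    return np.where(np.abs(j) <= t, y, 0.0)

# Exponentially weighted aggregation with penalty pen(t) = 2*eps^2*(2t+1)
# and temperature 4*eps^2, mimicking (14) with a uniform prior.
ts = np.arange(0, N + 1)
crit = np.array([np.sum((y - proj(t)) ** 2) + 2 * eps**2 * (2 * t + 1)
                 for t in ts])
w = np.exp(-(crit - crit.min()) / (4 * eps**2))   # shift avoids underflow
w /= w.sum()
theta_hat = sum(wk * proj(t) for wk, t in zip(w, ts))

# The aggregate adapts the cut-off from the data and beats the raw data y.
print(np.sum((theta_hat - theta) ** 2) < np.sum((y - theta) ** 2))  # → True
```

The weights concentrate on cut-offs near the true effective bandwidth, which is the sequence-space analogue of the smoothness adaptivity claimed for \(\widehat{f}_\varepsilon \).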
The following theorem gives an upper bound on the risk of the estimator \(\widehat{f}_\varepsilon \) when \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\).
Theorem 1
Let \(\beta >0\) and \(L>0\) be such that \(\log (\varepsilon ^{-2})\ge (2\beta )^{-1}\log (L)\) and \(L>\varepsilon ^{2}\big (\log (e\varepsilon ^{-2})\big )^{\frac{2\beta +s}{s}}\). Let \(\tilde{\mathcal{B }}\) be any subset of \(\mathcal{B }^d_{s,m}\). Assume that condition (2) holds. Then, for some constant \(C({\beta })>0\) depending only on \(\beta \) we have
Proof
Since the functions \(\varphi _{\varvec{j}}\) are orthonormal, \({\varvec{Y}}_\varepsilon \) is composed of independent Gaussian random variables with common variance equal to \(\varepsilon ^2\). Thus, the array \({\varvec{Y}}_\varepsilon \) defined by (12) obeys the Gaussian sequence model studied in [18]. Therefore, using Parseval’s theorem and [18, Cor. 6] we obtain that the estimator \(\widehat{f}_\varepsilon \) satisfies, for all \(f\),
where the minimum is taken over all \(({\varvec{t}}, {\varvec{\eta }})\in \bigcup _{s,m} \{{[\![}0,\varepsilon ^{-2}{]\!]}^m\times \mathcal{B }_{s,m}^d\}\). Denote by \({\varvec{\eta }}_0\) the unique element of \(\mathcal{B }_{0,1}^d\), for which \(\mathop {\text{ supp}}({\varvec{\eta }}_0)=\{\emptyset \}\). The corresponding estimator \(\widehat{f}_{{\varvec{t}},{\varvec{\eta }}_0}\) coincides with the constant function equal to \(Y_{\varvec{0}}\) and its risk is bounded by \(\varepsilon ^2+L\) for all \(f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\). Therefore,
Take now any \(f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\), and let \({\varvec{\eta }}^*\in \tilde{\mathcal{B }}\subseteq \mathcal{B }_{s,m}^d\) be such that \(f\in \mathcal{F }_{{\varvec{\eta }}^*}(\varvec{W}(\beta ,L))\). Then it follows from (16) that
Note that for all \(d,s\in \mathbb{N }^*\) such that \(s\le d\) we have
Also, we have the following bound on the risk of estimator \(\widehat{f}_{{\varvec{t}},{\varvec{\eta }}}\) for each \({\varvec{\eta }}\in \tilde{\mathcal{B }}\) and for an appropriate choice of the bandwidth parameter \({\varvec{t}}\in {[\![}0,\varepsilon ^{-2}{]\!]}^m\).
Lemma 1
Let \(\beta >0\), \(L\ge \varepsilon ^{2}\) be such that \(\log (\varepsilon ^{-2})\ge (2\beta )^{-1}\log (L)\). Let \({\varvec{t}}\in {[\![}0,\varepsilon ^{-2}{]\!]}^m\) be a vector with integer coordinates \(t_{V_\ell }=[(L/(3^{|V_\ell |}\varepsilon ^2))^{1/(2\beta +|V_\ell |)}\wedge \varepsilon ^{-2}], \ell =1,\dots ,m\). Assume that condition (2) holds. Then
A proof of this lemma is given in the appendix.
Combining (18) with (19) and (20) yields the following upper bound on the risk of \(\widehat{f}_\varepsilon \) :
where \(C_{\beta }>0\) is a constant depending only on \(\beta \). The assumptions of the theorem guarantee that \( \varepsilon ^2\log (2\varepsilon ^{-2})\le L^{\frac{s}{2\beta +s}} \varepsilon ^{\frac{4\beta }{2\beta +s}}\), so that the desired result follows from (17) and the last display.
The behavior of the estimator \(\widehat{f}_\varepsilon \) in the case \({\varvec{\Sigma }}={\varvec{T}}_{\!\!\mathcal{A }}\) is described in the next theorem.
Theorem 2
Assume that \(k=\max \{|\ell |: \, \ell \in \mathcal{A }\}<\varepsilon ^{-2}\). Then
The proof of Theorem 2 follows the same lines as that of Theorem 1. We take \(f\in \mathcal{F }_{s,m}({\varvec{T}}_{\!\!\mathcal{A }})\), and let \({\varvec{\eta }}^*\in \tilde{\mathcal{B }}\subseteq \mathcal{B }_{s,m}^d\) be such that \(f\in \mathcal{F }_{{\varvec{\eta }}^*}({\varvec{T}}_{\!\!\mathcal{A }})\). Let \({\varvec{t}}^*\in \mathbb{R }^m\) be the vector with all coordinates equal to \(k\). Then the same argument as in (18) yields
We can write \(\mathrm{supp}({\varvec{\eta }}^*)=\{V_1,\dots ,V_m\}\) where \(|V_\ell |\le s\). Since the model is parametric, there is no bias term in the expression for the risk on the right hand side of (22) and we have (cf. (29)):
Together with (22), this implies (21).
The bound of Theorem 2 is particularly interesting when \(k\) and \(s\) are small. For the examples of multilinear and polynomial systems [15, 20] we have \(k=1\). We also note that the result is much better than what can be obtained by using the Lasso. Indeed, consider the simplest case of single atom tensor product model (\(m=1\)). Since we do not know \(s\), we need to run the Lasso in the dimension \(p=(2k+1)^d\) and we can only guarantee the rate \(\varepsilon ^2\log p=d \varepsilon ^2\log (2 k+1)\), which is linear in the dimension \(d\). If \(d\) is very large and \(s\ll d\), this is much slower than the rate of Theorem 2.
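The dimension count behind this comparison can be made explicit. In the sketch below, the expression used for the compound rate is only a stand-in of the form \(ms\varepsilon ^2(1+\log d)\) suggested by the discussion above, not the exact bound (21); the parameter values are ours.

```python
import math

# Expanded design for the tensor product model: p = (2k+1)^d basis functions.
# A Lasso over all of them pays eps^2 * log(p) = d * eps^2 * log(2k+1),
# linear in d, whereas the bound of Theorem 2 depends on d only through log d.
eps, k, d, s, m = 1e-3, 1, 100, 2, 1
p = (2 * k + 1) ** d
lasso_rate = eps**2 * math.log(p)                    # = d * eps^2 * log(2k+1)
compound_rate = eps**2 * m * s * (1 + math.log(d))   # stand-in for the Thm 2 shape
print(lasso_rate / compound_rate)  # roughly d / log(d): already ~10 at d = 100
```

Even at \(d=100\) the Lasso guarantee is an order of magnitude worse, and the gap grows linearly in \(d\).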
4 Lower bound
In this section, we prove a minimax lower bound on the risk of any estimator over the class \(\mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\). We will assume that \(\{\varphi _{\varvec{j}}\}\) is the tensor-product trigonometric basis and \(\tilde{\mathcal{B }}= \tilde{\mathcal{B }}_{s,m}^d\), where \(\tilde{\mathcal{B }}_{s,m}^d\) denotes the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that the sets \(V\in \mathrm{\mathop {\text{ supp}}}({\varvec{\eta }})\) are disjoint. Then condition (2) holds with equality and \(C_*=1\). We will split the proof into two steps. First, we establish a lower bound on the minimax risk in the case of known structure \({\varvec{\eta }}\), i.e., when \(f\) belongs to the class \(\mathcal{F }_{\varvec{\eta }}(\varvec{W}(\beta ,L))\) for some known parameters \({\varvec{\eta }}\in \tilde{\mathcal{B }}\) and \(\beta ,L>0\). We will show that the minimax risk tends to zero at a rate no faster than \(m\varepsilon ^{4\beta /(2\beta +s)}\). In the second step, we will prove that if \({\varvec{\eta }}\) is unknown, then the minimax rate is bounded from below by \(ms\varepsilon ^2(1+\log (d/(sm^{1/s})))\) if the function \(f\) belongs to \(\mathcal{F }_{\varvec{\eta }}(\Theta )\) for a set \(\Theta \) spanned by the tensor products involving only the functions \(\varphi _1\) and \(\varphi _{-1}\) of various arguments.
4.1 Lower bound for known structure \({\varvec{\eta }}\)
Proposition 1
Let \(\{\varphi _{\varvec{j}}\}\) be the tensor-product trigonometric basis and let \(s,m,d\) be positive integers satisfying \(d\ge sm\). Assume that \(L\ge \varepsilon ^2\). Then there exists an absolute constant \(C>0\) such that
Proof
Without loss of generality, assume that \(m = 1\). We may also assume that \(L=1\) (again without loss of generality, since we can replace \(\varepsilon \) by \(\varepsilon /\sqrt{L}\), which by our assumption does not exceed 1). After renumbering, if necessary, we can assume that \({\varvec{\eta }}\) is such that \(\eta _V=1\) for \(V=\{1,\ldots ,s\}\) and \(\eta _V=0\) for \(V\ne \{1,\ldots ,s\}\).
Let \(t\) be an integer not smaller than \(4\). Then, the set \(I\) of all multi-indices \({\varvec{k}}\in \mathbb{Z }^{s}\) satisfying \(|{\varvec{k}}|_\infty \le t\) is of cardinality \(|I|\ge 9\). For any \({\varvec{\omega }}=(\omega _k, k\in I) \in \{0,1\}^I\), we set \(f_{{\varvec{\omega }}}({\varvec{x}})=\gamma \sum _{{\varvec{k}}\in I} \omega _{\varvec{k}}\varphi _{{\varvec{k}}}(x_1,\ldots ,x_s)\), where \(\varphi _{{\varvec{k}}}(x_1,\ldots ,x_s)=\prod _{j=1}^s \varphi _{k_j}(x_j), {\varvec{k}}=(k_1,\dots ,k_s)\), is an element of the tensor-product trigonometric basis and \(\gamma >0\) is a parameter to be chosen later. In view of the orthonormality of the basis functions \(\varphi _{\varvec{k}}\), we have
Therefore, we have \(\sum _{{\varvec{k}}}|{\varvec{k}}|_\infty ^{2\beta }\theta _{\varvec{k}}[f_{\varvec{\omega }}]^2\le t^{2\beta } \Vert f_{\varvec{\omega }}\Vert _2^2\le t^{2\beta }\gamma ^2(2t+1)^s\le \gamma ^2(2t+1)^{2\beta +s}\). Thus, the condition \(\gamma ^2(2t+1)^{2\beta +s}\le 1\) ensures that all the functions \(f_{\varvec{\omega }}\) belong to \(W(\beta ,1)\).
Furthermore, for two vectors \({\varvec{\omega }}, {\varvec{\omega }}^{\prime }\in \{0,1\}^I\) we have \(\Vert f_{\varvec{\omega }}-f_{{\varvec{\omega }}^{\prime }}\Vert _2^2=\gamma ^2|{\varvec{\omega }}-{\varvec{\omega }}^{\prime }|_1\). Note that the entries of the vectors \({\varvec{\omega }},{\varvec{\omega }}^{\prime }\) are either 0 or 1, therefore the \(\ell _1\) distance between these vectors coincides with the Hamming distance. According to the Varshamov-Gilbert lemma [28, Lemma 2.9], there exists a set \(\varOmega \subset \{0,1\}^I\) of cardinality at least \(2^{|I|/8}\) such that it contains the zero element and the pairwise distances \(|{\varvec{\omega }}-{\varvec{\omega }}^{\prime }|_1\) are at least \(|I|/8\) for any pair \({\varvec{\omega }},{\varvec{\omega }}^{\prime }\in \varOmega \).
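The Varshamov-Gilbert lemma is an existence statement; the following sketch (a sanity check, not the construction used in [28]) illustrates it for a small \(|I|\) by greedily scanning the hypercube, which produces a set containing the zero element, with pairwise Hamming distances at least \(|I|/8\) and cardinality at least \(2^{|I|/8}\):

```python
from itertools import product

n = 9                      # |I| in the proof; any n works here
dmin = n / 8               # required pairwise Hamming distance

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Greedy construction: scan the hypercube starting from the zero vector and
# keep every vector that is far from all vectors kept so far.
code = []
for omega in product((0, 1), repeat=n):
    if all(hamming(omega, kept) >= dmin for kept in code):
        code.append(omega)
```

The greedy set is maximal, so Hamming balls of radius \(\lceil n/8\rceil -1\) around its points cover the cube, which already forces \(|code|\) far above the \(2^{n/8}\) guaranteed by the lemma.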
We can now apply Theorem 2.7 from [28] that asserts that if, for some \(\tau >0\), we have \(\min _{{\varvec{\omega }},{\varvec{\omega }}^{\prime }\in \varOmega } \Vert f_{\varvec{\omega }}-f_{{\varvec{\omega }}^{\prime }}\Vert _2\ge 2\tau >0\), and
where \(\mathcal{K }(\cdot ,\cdot )\) denotes the Kullback-Leibler divergence, then \(\inf _{\widehat{f}}\max _{{\varvec{\omega }}\in \varOmega }\mathbf{E}_{f_{\varvec{\omega }}}[\Vert \widehat{f}-f_{{\varvec{\omega }}}\Vert _2^2]\ge c^{\prime }\tau ^2\) for some absolute constant \(c^{\prime }>0\). In our case, we set \(\tau =\gamma \sqrt{ |I|/32}\). Combining (23) and the fact that the Kullback-Leibler divergence between the Gaussian measures \(\mathbf{P}_{f}\) and \(\mathbf{P}_g\) is given by \(\frac{1}{2}\varepsilon ^{-2}\Vert f-g\Vert _2^2\), we obtain \(\frac{1}{|\varOmega |}\sum _{{\varvec{\omega }}\in \varOmega } \mathcal{K }(\mathbf{P}_{f_{\varvec{\omega }}},\mathbf{P}\!_0)\le \frac{1}{2}{\varepsilon ^{-2}\gamma ^2 |I|}\,\). If \(\gamma ^2\le (\log 2)\varepsilon ^2/64\), then (24) is satisfied, and the minimax risk is bounded from below, up to an absolute constant factor, by \(\tau ^2 =\gamma ^2(2t+1)^s/32\).
To finish the proof, it suffices to choose \(t\in \mathbb{N }\) and \(\gamma >0\) satisfying the following three conditions: \(t\ge 4\), \(\gamma ^2\le (2t+1)^{-2\beta -s}\) and \(\gamma ^2\le \varepsilon ^2\log (2)/64\). For the choice \(\gamma ^{-2}=(2t+1)^{2\beta +s}+\varepsilon ^{-2}64/\log (2)\) and \(t=[4\varepsilon ^{-2/(2\beta +s)}]\), all these conditions are satisfied and \(\tau ^2 \ge c_1\varepsilon ^{4\beta /(2\beta +s)}\) for some absolute positive constant \(c_1\).
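This choice of \(t\) and \(\gamma \) can be checked numerically; the snippet below (a verification sketch, with illustrative values of \(\beta \) and \(s\)) confirms the three conditions and that \(\tau ^2/\varepsilon ^{4\beta /(2\beta +s)}\) stays bounded away from zero as \(\varepsilon \rightarrow 0\):

```python
import math

def check_parameters(eps, beta, s):
    """Verify the three conditions on (t, gamma) and return tau^2 / eps^{4beta/(2beta+s)}."""
    t = int(4 * eps ** (-2 / (2 * beta + s)))          # t = [4 eps^{-2/(2beta+s)}]
    gamma2 = 1.0 / ((2 * t + 1) ** (2 * beta + s) + 64 / (eps ** 2 * math.log(2)))
    assert t >= 4
    assert gamma2 <= (2 * t + 1) ** (-2 * beta - s)    # functions stay in W(beta, 1)
    assert gamma2 <= eps ** 2 * math.log(2) / 64       # KL condition (24)
    tau2 = gamma2 * (2 * t + 1) ** s / 32
    return tau2 / eps ** (4 * beta / (2 * beta + s))

ratios = [check_parameters(eps, beta=1, s=2) for eps in (0.1, 0.01, 0.001)]
```

The ratio is essentially constant in \(\varepsilon \), matching the claim \(\tau ^2\ge c_1\varepsilon ^{4\beta /(2\beta +s)}\).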
4.2 Lower bound for unknown structure \({\varvec{\eta }}\)
Proposition 2
Let the assumptions of Proposition 1 be satisfied. Then there exists an absolute constant \(C^{\prime }>0\) such that
Proof
We use again Theorem 2.7 in [28] but with a choice of the finite subset of \(\mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\) different from that of Proposition 1. First, we introduce some additional notation. For every triplet \((m,s,d)\in \mathbb{N }_*^3\) satisfying \(ms\le d\), let \(\mathcal{P }_{s,m}^d\) be the set of collections \(\pi =\{V_1,\ldots ,V_m\}\) such that each \(V_\ell \subseteq \{ 1,\dots ,d\}\) has exactly \(s\) elements and \(V_\ell \)’s are pairwise disjoint. We consider \(\mathcal{P }_{s,m}^d\) as a metric space with the distance \(\rho (\pi ,\pi ^{\prime })= \frac{1}{m}\sum _{\ell =1}^m \mathbf{1}(V_\ell \not \in \{V^{\prime }_1,\ldots ,V^{\prime }_m\})= \frac{|\pi \Delta \pi ^{\prime }|}{2m}\,\), where \(\pi ^{\prime }=\{V_1^{\prime },\ldots ,V_m^{\prime }\}\in \mathcal{P }_{s,m}^d\). It is easy to see that \(\rho (\cdot ,\cdot )\) is a distance bounded by \(1\).
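The fact that \(\rho \) is a distance bounded by 1, and the identity \(\frac{1}{m}\sum _{\ell }\mathbf{1}(V_\ell \notin \pi ^{\prime })=\frac{|\pi \Delta \pi ^{\prime }|}{2m}\), can be verified by brute force on a small instance (a sanity check, not part of the proof):

```python
from itertools import combinations

def rho(pi1, pi2):
    """rho(pi, pi') = |pi triangle pi'| / (2m) for collections given as frozensets."""
    m = len(pi1)
    return len(pi1 ^ pi2) / (2 * m)

# All collections of m pairwise-disjoint s-subsets of {1,...,d} for small d, s, m.
d, s, m = 5, 2, 2
subsets = [frozenset(c) for c in combinations(range(1, d + 1), s)]
P = [frozenset(p) for p in combinations(subsets, m)
     if all(not a & b for a, b in combinations(p, 2))]

for p1 in P:
    for p2 in P:
        r = rho(p1, p2)
        assert 0 <= r <= 1                              # bounded by 1
        assert r == rho(p2, p1)                         # symmetry
        assert (r == 0) == (p1 == p2)                   # identity of indiscernibles
        assert all(r <= rho(p1, p3) + rho(p3, p2) for p3 in P)  # triangle inequality
        # the two expressions for rho coincide
        assert r == sum(V not in p2 for V in p1) / m
```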
For any \(\vartheta \in (0,1)\), let \(\mathcal{N }^d_{s,m}(\vartheta )\) denote the logarithm of the packing number, i.e., the logarithm of the largest integer \(K\) such that there are \(K\) elements \(\pi ^{(1)},\ldots ,\pi ^{(K)}\) of \(\mathcal{P }_{s,m}^d\) satisfying \(\rho (\pi ^{(k)},\pi ^{(k^{\prime })})\ge \vartheta \) for all \(k\ne k^{\prime }\). To each \(\pi ^{(k)}\) we associate a family of functions \(\mathcal{U }=\{f_{k,{\varvec{\omega }}}:{\varvec{\omega }}\in \{-1,1\}^{ms}, \,k=1,\dots ,K\}\) defined by
where \(\tau =(1/4)\min \big (\varepsilon \sqrt{ms\log 2+\log K}, \sqrt{L}\big )\) and \(\varphi _{{\varvec{\omega }},V}({\varvec{x}}_V)=\prod _{j\in V} \varphi _{\omega _j}(x_j)\). Using that \(\{\varphi _{{\varvec{j}}}\}\) is the tensor-product trigonometric basis it is easy to see that each \(f_{k,{\varvec{\omega }}}\) belongs to \(\mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\). Next, \(|\mathcal{U }|=2^{ms}K\) and, for any \(f_{k,{\varvec{\omega }}}\in \mathcal{U }\), the Kullback-Leibler divergence between \(\mathbf{P}_{f_{k,{\varvec{\omega }}}}\) and \(\mathbf{P}\!_0\) is equal to \(\mathcal{K }(\mathbf{P}_{f_{k,{\varvec{\omega }}}},\mathbf{P}\!_0)=\frac{1}{2}\varepsilon ^{-2}\Vert f_{k,{\varvec{\omega }}}\Vert _2^2=\frac{\varepsilon ^{-2}\tau ^2}{2} \le \frac{\log |\mathcal{U }|}{16}\). Furthermore, the functions \(f_{k,{\varvec{\omega }}}\) are not too close to each other. Indeed, since \(\{\varphi _{{\varvec{j}}}\}\) is the tensor-product trigonometric basis we get that, for all \(f_{k,{\varvec{\omega }}},f_{k^{\prime },{\varvec{\omega }}^{\prime }}\in \mathcal{U }\),
These remarks and Theorem 2.7 in [28] imply that
for some absolute constant \(c_3>0\). Assume first that \(d<4sm^{1/s}\). Then \(ms\log 2\ge \frac{ms}{5}\log \big (\frac{8d}{sm^{1/s}}\big )\) and the result of the proposition is straightforward. If \(d\ge 4sm^{1/s}\) we fix \(\vartheta =1/8\) and use the following lemma (cf. the Appendix for a proof) to bound \(\log K=\mathcal{N }_{s,m}^d(\vartheta )\) from below.
Lemma 2
If \(d\ge 4sm^{1/s}\) and \(\vartheta \in (0,1/8]\), we have \(\mathcal{N }^d_{s,m}(\vartheta )\ge -m\log ({2es^{1/2}})+\frac{2ms}{3}\log \big (\frac{d}{sm^{1/s}}\big )\).
This yields
It is easy to check that \(s^{-1}\log ({2es^{1/2}})\le 1.7,\) while for \(d\ge 4sm^{1/s}\) we have \(\frac{2}{3}\log \Big (\frac{8d}{sm^{1/s}}\Big )\ge 2.3.\) Combining these inequalities with (25) and (26) we get the result.
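Both numerical bounds used in this last step can be confirmed directly (a verification sketch):

```python
import math

# s^{-1} log(2 e s^{1/2}) is maximized at s = 1 and stays below 1.7
max_lhs = max(math.log(2 * math.e * math.sqrt(s)) / s for s in range(1, 10_000))

# For d >= 4 s m^{1/s}, the factor (2/3) log(8d / (s m^{1/s})) is minimized at
# d = 4 s m^{1/s}, where it equals (2/3) log 32 independently of s and m.
min_rhs = (2 / 3) * math.log(32)
```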
5 Discussion and outlook
We presented a new framework, called the compound functional model, for performing various statistical tasks such as prediction, estimation and testing in a high-dimensional context. We studied the problem of estimation in this model from a minimax point of view when the data are generated by a Gaussian process. We established upper and lower bounds on the minimax risk that match up to a multiplicative constant. These bounds are nonasymptotic and are attained adaptively with respect to the macroscopic and microscopic sparsity parameters \(m\) and \(s\), as well as to the complexity of the atoms of the model. In particular, we improve in several respects upon the existing results for the sparse additive model, which is a special case of the compound functional model (and the only case for which the rates were previously explicitly treated in the literature):
- The exact expression for the optimal rate that we obtain reveals that the existing methods for the sparse additive model, based on penalized least squares techniques, have logarithmically suboptimal rates.
- Unlike most previous work, we do not require restricted isometry type assumptions on the subspaces of the additive model; we need only a much weaker one-sided condition (2). Possible extensions of the existing literature to the general compound model would again suffer from rate suboptimality and require extra conditions of this type.
- When specialized to the sparse additive model, our results are adaptive with respect to the smoothness of the atoms, whereas all previous work on the rates considered the smoothness (or the reproducing kernel) as given in advance.
For the general compound model, the main difficulty lies in the proof of the lower bounds of order \(ms \varepsilon ^2\log ( d/(s m^{1/s}))\), which are not covered by standard tools such as the Varshamov-Gilbert lemma or the \(k\)-selection lemma. We therefore developed new tools for the lower bounds, which can be of independent interest.
An important issue that remained out of the scope of the present work, but is undeniably worth studying, is the possibility of achieving the minimax rates by computationally tractable procedures. Clearly, the complexity of exact computation of the procedure described in Sect. 3 scales as \(\varepsilon ^{-2m}2^{M_{d,s}}\), which is prohibitively large for typical values of \(d, s\) and \(m\). It is possible, however, to approximate our estimator by using a Markov Chain Monte-Carlo (MCMC) algorithm similar to that of [23, 24]. The idea is to begin with an initial state \(({\varvec{t}}_0,{\varvec{\eta }}_0)\) and to randomly generate a new candidate \(({\varvec{u}},{\varvec{\zeta }})\) according to the distribution \(q(\cdot |{\varvec{t}}_0,{\varvec{\eta }}_0)\), where \(q(\cdot |\cdot )\) is a given Markov kernel. Then a Bernoulli random variable \(\xi \) taking the value 1 with probability \(\alpha = 1\wedge \frac{\widehat{\pi }({\varvec{u}},{\varvec{\zeta }})}{\widehat{\pi }({\varvec{t}}_0,{\varvec{\eta }}_0)}\frac{q({\varvec{t}}_0,{\varvec{\eta }}_0|{\varvec{u}},{\varvec{\zeta }})}{q({\varvec{u}},{\varvec{\zeta }}|{\varvec{t}}_0,{\varvec{\eta }}_0)}\) is drawn and the new state \(({\varvec{t}}_1,{\varvec{\eta }}_1)=\xi \cdot ({\varvec{u}},{\varvec{\zeta }})+(1-\xi )\cdot ({\varvec{t}}_0,{\varvec{\eta }}_0)\) is defined. This procedure is repeated \(K\) times, producing a realization \(\{({\varvec{t}}_k,{\varvec{\eta }}_k); k=0,\ldots ,K\}\) of a reversible Markov chain. The average value \(\frac{1}{K}\sum _{k=1}^K \widehat{\varvec{\theta }}_{{\varvec{t}}_k,{\varvec{\eta }}_k}\) then provides an approximation to the estimator \(\widehat{f}_\varepsilon \) defined in Sect. 3.
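The accept/reject step just described can be sketched generically as follows; the state \(({\varvec{t}},{\varvec{\eta }})\), the weight \(\widehat{\pi }\) and the kernel \(q\) are abstracted into callables, and the discrete toy target at the end is purely illustrative (it is not the compound-model posterior):

```python
import math
import random

def mh_step(state, log_target, propose, log_q):
    """One Metropolis-Hastings step: draw a candidate and accept it with
    probability alpha = min(1, pi(u) q(old|u) / (pi(old) q(u|old)))."""
    candidate = propose(state)
    log_alpha = (log_target(candidate) - log_target(state)
                 + log_q(state, candidate) - log_q(candidate, state))
    if math.log(random.random()) < min(0.0, log_alpha):
        return candidate          # xi = 1: move to the candidate
    return state                  # xi = 0: keep the current state

# Toy illustration: 4-point state space with target weights 1:2:3:4.
weights = [1.0, 2.0, 3.0, 4.0]
log_target = lambda x: math.log(weights[x])
propose = lambda x: (x + random.choice([-1, 1])) % 4   # symmetric walk on a cycle
log_q = lambda y, x: math.log(0.5)                     # q(y|x), symmetric

random.seed(0)
state, n_steps = 0, 200_000
counts = [0, 0, 0, 0]
for _ in range(n_steps):
    state = mh_step(state, log_target, propose, log_q)
    counts[state] += 1
freqs = [c / n_steps for c in counts]   # approaches [0.1, 0.2, 0.3, 0.4]
```

Since the proposal is symmetric here, the \(q\)-ratio cancels; the general expression is kept to match the algorithm in the text.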
If \(s\) and \(m\) are small and \(q(\cdot |{\varvec{t}},{\varvec{\eta }}^{\prime })\) is such that all the mass of this distribution is concentrated on the nearest neighbors of \({\varvec{\eta }}^{\prime }\) in the hypercube of all \(2^{M_{d,s}}\) possible \({\varvec{\eta }}\)'s, then the computations can be performed in polynomial time. For example, if \(s=2\), i.e., if we allow only pairwise interactions, each step of the algorithm requires \(\sim \varepsilon ^{-2m}d^2\) computations, where the factor \(\varepsilon ^{-2m}\) can be reduced to a power of \(\log (\varepsilon ^{-2})\) by a suitable modification of the estimator. How fast such MCMC algorithms converge to our estimator, and what is the most appealing choice of the Markov kernel \(q(\cdot |\cdot )\), are challenging open questions for future research.
Notes
Note that every function of less than \(s\) variables can also be considered as a function of \(s\) variables.
References
Bach, F.: High-dimensional non-linear variable selection through hierarchical kernel learning. Technical report, arXiv:0909.0844 (2009)
Bertin, K., Lecué, G.: Selection of variables and dimension reduction in high-dimensional non-parametric regression. Electron. J. Stat. 2, 1224–1241 (2008)
Bickel, P.J., Ritov, Y., Tsybakov, A.B.: Simultaneous analysis of lasso and Dantzig selector. Ann. Statist. 37(4), 1705–1732 (2009)
Bickel, P.J., Ritov, Y., Tsybakov, A.B.: Hierarchical selection of variables in sparse high-dimensional regression. In: Borrowing Strength: Theory Powering Applications – A Festschrift for Lawrence D. Brown. Inst. Math. Stat. Collect., vol. 6, pp. 56–69. Institute of Mathematical Statistics, Beachwood, OH (2010)
Brown, L.D., Low, M.G.: Asymptotic equivalence of nonparametric regression and white noise. Ann. Statist. 24(6), 2384–2398 (1996)
Comminges, L., Dalalyan, A.S.: Tight conditions for consistent variable selection in high dimensional nonparametric regression. J. Mach. Learn. Res. Proc. Track 19, 187–206 (2011)
Comminges, L., Dalalyan, A.S.: Tight conditions for consistency of variable selection in the context of high dimensionality. Ann. Statist. 40(6), 2667–2696 (2012). doi:10.1214/12-AOS1046
Dai, D., Rigollet, P., Zhang, T.: Deviation optimal learning using greedy \(Q\)-aggregation. Ann. Statist. 40(3), 1878–1905 (2012)
Dalalyan, A., Reiß, M.: Asymptotic statistical equivalence for scalar ergodic diffusions. Probab. Theory Related Fields 134(2), 248–282 (2006)
Dalalyan, A.S., Tsybakov, A.B.: Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity. Mach. Learn. 72(1–2), 39–61 (2008)
Dalalyan, A.S., Tsybakov, A.B.: Sparse regression learning by aggregation and Langevin Monte-Carlo. J. Comput. System Sci. 78(5), 1423–1443 (2012)
Gayraud, G., Ingster, Y.: Detection of sparse variable functions. Electron. J. Stat. 6, 1409–1448 (2012)
Golubev, G.K., Nussbaum, M., Zhou, H.H.: Asymptotic equivalence of spectral density estimation and Gaussian white noise. Ann. Statist. 38(1), 181–214 (2010)
Ingster, Yu., Lepski, O.: Multichannel nonparametric signal detection. Math. Methods Statist. 12(3), 247–275 (2003)
Kekatos, V., Giannakis, G.B.: Sparse Volterra and polynomial regression models: recoverability and estimation. IEEE Trans. Signal Process. 59(12), 5907–5920 (2011)
Koltchinskii, V., Yuan, M.: Sparsity in multiple kernel learning. Ann. Statist. 38(6), 3660–3695 (2010)
Lecué, G., Mendelson, S.: On the optimality of the aggregate with exponential weights for low temperatures. Bernoulli (2012, to appear)
Leung, G., Barron, A.R.: Information theory and mixing least-squares regressions. IEEE Trans. Inform. Theory 52(8), 3396–3410 (2006)
Meier, L., van de Geer, S., Bühlmann, P.: High-dimensional additive modeling. Ann. Statist. 37(6B), 3779–3821 (2009)
Nazer, B., Nowak, R.: Sparse interactions: identifying high-dimensional multilinear systems via compressed sensing. In: Proceedings of the Allerton Conference. Monticello, IL (2010)
Raskutti, G., Wainwright, M.J., Yu, B.: Minimax-optimal rates for sparse additive models over kernel classes via convex programming. J. Mach. Learn. Res. 13, 389–427 (2012)
Reiß, M.: Asymptotic equivalence for nonparametric regression with multivariate and random design. Ann. Statist. 36(4), 1957–1982 (2008)
Rigollet, P., Tsybakov, A.B.: Exponential screening and optimal rates of sparse estimation. Ann. Statist. 39(2), 731–771 (2011)
Rigollet, P., Tsybakov, A.B.: Sparse estimation by exponential weighting. Statist. Sci. 27(4), 558–575 (2012)
Rosasco, L., Villa, S., Mosci, S., Santoro, M., Verri, A.: Nonparametric sparsity and regularization. Technical report, arXiv:1208.2572v1 (2012)
Stone, C.J.: Additive regression and other nonparametric models. Ann. Statist. 13(2), 689–705 (1985)
Suzuki, T.: PAC-Bayesian bound for Gaussian process regression and multiple kernel additive model. In: COLT 2012. arXiv:1102.3616v1 [math.ST] (2012)
Tsybakov, A.B.: Introduction to nonparametric estimation. Springer Series in Statistics. Springer, New York (2009)
van de Geer, S., Bühlmann, P.: Statistics for High-Dimensional Data. Springer Texts in Statistics, 2nd edn. Springer, New York (2011)
Acknowledgments
The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR) under the grant PARCIMONIE.
Appendices
Appendix A: Proof of Lemma 1
Let \({\varvec{\eta }}\in \tilde{\mathcal{B }}\) be such that \(f\in \mathcal{F }_{\varvec{\eta }}(\varvec{W}(\beta ,L))\) and \(\mathop {\text{ supp}}({\varvec{\eta }})=\{V_1,\ldots ,V_m\}\) where \(|V_\ell |\le s\). Then there exist a constant \(\bar{f}\) and \(m\) functions \(f_1,\ldots ,f_m\) such that \(f_\ell \in W_{V_\ell }(\beta ,L), \ell =1,\dots ,m\), and \(f=\bar{f}+f_1+\cdots +f_m\). Set \(\theta _{{\varvec{j}},\ell }=\theta _{\varvec{j}}[f_\ell ], ({\varvec{j}},\ell )\in \mathbb{Z }^d\times \{1,\dots ,m\}\). Using the notation \(t_\ell =t_{V_\ell }\) and
we get
where \((\xi _{\varvec{j}})_{{\varvec{j}}\in \mathbb{Z }^d}\) are i.i.d. Gaussian random variables with zero mean and variance one. In view of the bias-variance decomposition and (2), we bound the risk of \(\widehat{f}_{{\varvec{t}},{\varvec{\eta }}}\) as follows:
In the right-hand side of (27), the first summand is the variance term, while the second summand is the (squared) bias term of the risk. We bound these two terms separately. For the bias contribution to the risk, we find:
If \(t_\ell \ge 1\), then the variance contribution to the risk is bounded as follows:
where we have used that \(t_\ell \le (L/(3^{|V_\ell |}\varepsilon ^2))^{1/(2\beta +|V_\ell |)}\) and \(|V_\ell |\le s\). Finally, note that the condition \(\log (\varepsilon ^{-2})\ge (2\beta )^{-1}\log (L)\) implies that \(L\varepsilon ^{4\beta } \le L^{s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)}\) in (28). Thus, inequality (27) together with (28) and (29) yields the lemma in the case \(t_\ell \ge 1\). If \(t_\ell <1\), i.e., \(t_\ell =0\), the same arguments imply that the bias is bounded by \(L\) and the variance is bounded by \(\varepsilon ^2\). Since \(L\ge \varepsilon ^2\), the sum on the right-hand side of (27) is bounded by \((1+C_*)L\). One can check that \(t_\ell \) equals \(0\) only if \(L<3^{s}\varepsilon ^{2}\), and in this case \(L= L^{s/(2\beta +s)}L^{2\beta /(2\beta +s)} \le L^{s/(2\beta +s)}\, 3^{2\beta s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)}\le 3^{2\beta \wedge s}L^{s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)} \). This completes the proof.
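As a sanity check of the bias-variance trade-off behind Lemma 1, one can evaluate the surrogate bound \(Lt^{-2\beta }+\varepsilon ^2(2t+1)^s\) at the prescribed truncation level \(t_\ell \) and compare it with the target rate; the constant 10 below is an empirical envelope for the listed configurations, not a claim from the paper:

```python
def risk_bound(t, L, beta, s, eps):
    """Surrogate for the risk of a single atom: squared bias + variance (t >= 1)."""
    return L * t ** (-2 * beta) + eps ** 2 * (2 * t + 1) ** s

checks = []
for (L, beta, s, eps) in [(1, 1, 2, 1e-2), (1, 1, 2, 1e-4), (2, 2, 1, 1e-2)]:
    t = int((L / (3 ** s * eps ** 2)) ** (1 / (2 * beta + s)))   # the choice t_ell
    rate = L ** (s / (2 * beta + s)) * eps ** (4 * beta / (2 * beta + s))
    checks.append(risk_bound(t, L, beta, s, eps) / rate)
```

The ratio stays bounded as \(\varepsilon \rightarrow 0\), in line with the rate \(L^{s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)}\).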
Appendix B: Proof of Lemma 2
Prior to presenting a proof of Lemma 2, we need an additional result.
Lemma 3
For a triplet \((m,s,d)\in \mathbb{N }_*^3\) satisfying \(ms\le d\), let \(\mathcal{P }_{s,m}^d\) be the set of all collections \(\pi =\{A_1,\ldots ,A_m\}\) with \(A_i\subseteq \{1,\dots ,d\}\) such that \(|A_i|=s\) for all \(i\) and \(A_i\cap A_k=\emptyset \) for \(i\ne k\). Then
Proof
Using standard combinatorial arguments we find
If either \(s=1\) or \(m=1\), then \({(ms)!}={(s!)^m m!}\) and the lower bound stated in the lemma is obviously true. Assume now that \(m\ge 2\) and \(s\ge 2\). Recall that according to the Stirling formula, for every \(n\in \mathbb{N }\), \(\sqrt{2\pi n}(n/e)^{n}\le n!\le \sqrt{2\pi n}(n/e)^{n} e^{1/(12n)}\). Therefore,
Since the expression in square brackets in the last display is greater than 1, we obtain the desired lower bound on \(|\mathcal{P }_{s,m}^d|\). The upper bound follows from (19) and the fact that \(|\mathcal{P }_{s,m}^d|\le |\mathcal{B }_{s,m}^d|\).
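The standard count \(|\mathcal{P }_{s,m}^d|=\frac{d!}{(s!)^m\, m!\,(d-ms)!}\) (which appears to be the display omitted after "we find" above) can be confirmed by exhaustive enumeration for small \((d,s,m)\):

```python
from itertools import combinations
from math import factorial

def brute_count(d, s, m):
    """Count unordered collections of m pairwise-disjoint s-subsets of {1,...,d}."""
    subsets = list(combinations(range(d), s))
    return sum(all(set(a).isdisjoint(b) for a, b in combinations(p, 2))
               for p in combinations(subsets, m))

def formula(d, s, m):
    # ordered choices d!/(d-ms)!, divided by within-set order (s!)^m and set order m!
    return factorial(d) // (factorial(s) ** m * factorial(m) * factorial(d - m * s))

for (d, s, m) in [(4, 1, 2), (6, 2, 2), (7, 2, 3), (6, 3, 2)]:
    assert brute_count(d, s, m) == formula(d, s, m)
```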
Proof of Lemma 2
Consider first the case \(m=1\). The set \(\mathcal{P }^d_{s,1}\) is the collection of all subsets of \(\{ 1,\dots ,d\}\) having exactly \(s\) elements. The distance \(\rho \) is then 0 if the sets coincide and 1 otherwise. Thus, we need to bound from below the logarithm of \(|\mathcal{P }^d_{s,1}|=\genfrac(){0.0pt}{}{d}{s}\). It is enough to use the inequality \(\log \genfrac(){0.0pt}{}{d}{s}\ge s\log (d/s)\).
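The inequality \(\log \genfrac(){0.0pt}{}{d}{s}\ge s\log (d/s)\) used here, equivalent to \(\genfrac(){0.0pt}{}{d}{s}\ge (d/s)^s\), is easy to confirm numerically (a sanity check):

```python
from math import comb, log

# C(d, s) >= (d/s)^s, since each factor (d-i)/(s-i) in the product is >= d/s
ok = all(log(comb(d, s)) >= s * log(d / s) - 1e-12
         for d in range(1, 40) for s in range(1, d + 1))
```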
Assume now that \(m\ge 2\). Since \(\pi ^{(1)},\ldots ,\pi ^{(K)}\) is a maximal \(\vartheta \)-separated set of \(\mathcal{P }_{s,m}^d\) we have that \(\mathcal{P }_{s,m}^d\) is covered by the union of \(\rho \)-balls \(B(\pi ^{(k)},\vartheta )\) of radius \(\vartheta \) centered at \(\pi ^{(k)}\)’s. Therefore,
It is clear that the cardinality of the ball \(|B(\pi ^{(k)},\vartheta )|\) does not depend on \(\pi ^{(k)}\). This yields
where \(\pi ^0=\{A^0_1,\ldots ,A^0_m\}\) such that \(A^0_i=\{ (i-1)s+1,\dots , is\}\). We have already established a lower bound on \(|\mathcal{P }_{s,m}^d|\) in Lemma 3. We now find an upper bound on the cardinality of the ball \(B(\pi ^{0},\vartheta )\). Let \(m_\vartheta \) be the smallest integer greater than or equal to \((1-\vartheta )m\). Consider some \(\pi =\{A_1,\ldots ,A_m\}\in \mathcal{P }_{s,m}^d\). Note that \(\pi \in B(\pi ^{0},\vartheta )\) if and only if
This means that there are at least \(m_\vartheta \) indices \(i_1,\ldots ,i_{m_\vartheta }\) such that the \(m_\vartheta \) sets \(A^0_{i_j}\) are in \(\pi \) and the remaining \(m-m_\vartheta \) elements of \(\pi \) are chosen as an arbitrary collection of \(m-m_\vartheta \) disjoint subsets of \(\{ 1,\dots ,d\}\setminus \bigcup _{j=1}^{m_\vartheta } A^0_{i_j}\), each of which is of cardinality \(s\). There are \(\genfrac(){0.0pt}{}{m}{m_\vartheta }\) ways of choosing \(\{i_1,\ldots ,i_{m_\vartheta }\}\) and, once this choice is fixed, there are \(|\mathcal{P }_{s,m-m_\vartheta }^{d-sm_\vartheta }|\) ways of choosing the remaining parts. Thus, \(|B(\pi ^{0},\vartheta )|\le \genfrac(){0.0pt}{}{m}{m_\vartheta }|\mathcal{P }_{s,m-m_\vartheta }^{d-sm_\vartheta }|\). Using this inequality and Lemma 3 we obtain
Note first that the last fraction in the right-hand side above is not smaller than \(1/2^m\). Furthermore, since \(\vartheta \le 1/8\) we have \(m_\vartheta \ge m\big (1-\vartheta \big )\ge 7m/8\) and after some algebra we deduce from the previous display that
Since \(d\ge 4s m^{1/s}\), we have
and the result of the lemma follows from the inequality \(7/8-1/(4\log (4))\ge 2/3\).
Dalalyan, A., Ingster, Y. & Tsybakov, A.B. Statistical inference in compound functional models. Probab. Theory Relat. Fields 158, 513–532 (2014). https://doi.org/10.1007/s00440-013-0487-y
Keywords
- Compound functional model
- Minimax estimation
- Sparse additive structure
- Dimension reduction
- Structure adaptation