1 Introduction

High-dimensional statistical inference has undergone tremendous development over the past decade, motivated by applications in various fields such as bioinformatics, computer vision, and financial engineering. The most intensively investigated models in the high-dimensional context are the (generalized) linear models, for which efficient procedures are well known and the theoretical properties are well understood (cf., for instance, [3, 10, 11, 29]). More recently, there has been increasing interest in studying non-linear models in the high-dimensional setting [6, 12, 16, 21, 27] under various types of sparsity assumptions. The present paper introduces a general framework that unifies these studies and describes the theoretical limits of statistical procedures in high-dimensional non-linear problems.

In order to reduce the technicalities and focus on the main ideas, we consider the Gaussian white noise model, which is known to be asymptotically equivalent, under some natural conditions, to the model of regression [5, 22], as well as to other nonparametric models [9, 13]. Thus, we assume that we observe a real-valued Gaussian process \({\varvec{Y}}=\{Y(\phi ):\phi \in L^2([0,1]^d)\}\) such that

$$\begin{aligned} \mathbf{E}_{f}[Y(\phi )]=\int _{[0,1]^d} {f}({\varvec{x}})\,\phi ({\varvec{x}})\,d{\varvec{x}},\qquad \mathbf{Cov}_f(Y(\phi ),Y(\phi ^{\prime }))=\varepsilon ^2\int _{[0,1]^d} \phi ({\varvec{x}})\phi ^{\prime }({\varvec{x}})\,d{\varvec{x}}, \end{aligned}$$

for all \(\phi ,\phi ^{\prime } \in L^2([0,1]^d)\), where \(f\) is an unknown function in \(L^2([0,1]^d)\), \(\mathbf{E}_{f}\) and \(\mathbf{Cov}_{f}\) are the expectation and covariance operators corresponding to \(f\), and \(\varepsilon \) is some positive number. It is well known that these two properties uniquely characterize the probability distribution of a Gaussian process, which we will further denote by \(\mathbf{P}\!_f\) (respectively, by \(\mathbf{P}\!_0\) if \(f\equiv 0\)). Alternatively, \({\varvec{Y}}\) can be considered as a trajectory of the process

$$\begin{aligned} dY({\varvec{x}})={f}({\varvec{x}})\,d{\varvec{x}}+\varepsilon dW({\varvec{x}}),\quad {\varvec{x}}\in [0,1]^d, \end{aligned}$$

where \(W({\varvec{x}})\) is a \(d\)-parameter Brownian sheet. The parameter \(\varepsilon \) is assumed known; in the regression model it corresponds to the quantity \(\sigma n^{-1/2}\), where \(\sigma ^2\) is the variance of the noise and \(n\) is the sample size. Without loss of generality, we assume in what follows that \(0<\varepsilon <1\).

1.1 Notation

First, we introduce some notation. Vectors in finite-dimensional spaces and infinite sequences are denoted by boldface letters; vector norms are denoted by \(|\cdot |\), while function norms are denoted by \(\Vert \cdot \Vert \). Thus, for \(\mathbf{v}=(v_1,\dots ,v_d)\in \mathbb{R }^d\) we set

$$\begin{aligned} |\mathbf{v}|_0=\sum \limits _{j=1}^{d}\mathbf{1}(v_j\ne 0),\quad |\mathbf{v}|_\infty =\max _{j= 1,\dots ,d} |v_j|, \quad |\mathbf{v}|_q^q=\sum \limits _{j=1}^{d}|v_j|^q,\ 1\le q<\infty , \end{aligned}$$

whereas for a function \(f:[0,1]^d\rightarrow \mathbb{R }\) we set

$$\begin{aligned} \Vert f\Vert _\infty =\sup _{{\varvec{x}}\in [0,1]^d} |f({\varvec{x}})|,\qquad \Vert f\Vert _q^q=\int _{[0,1]^d} |f({\varvec{x}})|^q\,d{\varvec{x}},\ 1\le q<\infty . \end{aligned}$$

We denote by \(L^2_0([0,1]^d)\) the subspace of \(L^2([0,1]^d)\) containing all the functions \(f\) such that \(\int _{[0,1]^d} f({\varvec{x}})\,d{\varvec{x}}=0\). The notation \(\langle \cdot ,\cdot \rangle \) will be used for the inner product in \(L^2([0,1]^d)\), that is \(\langle h,\tilde{h}\rangle =\int _{[0,1]^d} h({\varvec{x}})\tilde{h}({\varvec{x}})\,d{\varvec{x}}\) for any \(h,\tilde{h}\in L^2([0,1]^d)\). For two integers \(a\) and \(a^{\prime }\), we denote by \({[\![}a,a^{\prime }{]\!]}\) the set of all integers belonging to the interval \([a,a^{\prime }]\). We denote by \([t]\) the integer part of a real number \(t\). For a finite set \(V\), we denote by \(|V|\) its cardinality. For a vector \({\varvec{x}}\in \mathbb{R }^d\) and a set of indices \(V\subseteq \{ 1,\dots ,d\}\), the vector \({\varvec{x}}_V\in \mathbb{R }^{|V|}\) is defined as the restriction of \({\varvec{x}}\) to the coordinates with indices belonging to \(V\). For every \(s\in \{ 1,\dots , d\}\) and \(m\in \mathbb{N }\), we define \(\mathcal{V }_s^d=\big \{V\subseteq \{ 1,\dots ,d\}:|V|\le s\big \}\) and the set of binary vectors \(\mathcal{B }_{s,m}^d=\big \{{\varvec{\eta }}\in \{0,1\}^{\mathcal{V }_s^d}:|{\varvec{\eta }}|_0=m\big \}\). We also use the notation \(M_{d,s}\triangleq |\mathcal{V }_s^d|\). We extend these definitions to \(s=0\) by setting \(\mathcal{V }_0^d=\{\emptyset \}, M_{d,0}=1, |\mathcal{B }_{0,1}^d|=1\), and \(|\mathcal{B }_{0,m}^d|=0\) for \(m>1\). For a vector \(\varvec{a}\), we denote by \(\mathop {\text{ supp}}(\varvec{a})\) the set of indices of its non-zero coordinates. In particular, the support \(\mathop {\text{ supp}}({\varvec{\eta }})\) of a binary vector \({\varvec{\eta }}=\{\eta _V\}_{V\in \mathcal{V }_s^d} \in \mathcal{B }_{s,m}^d\) is the set of \(V\)’s such that \(\eta _V=1\).
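
To make this combinatorial notation concrete, here is a small Python illustration (ours, not part of the formal development; all function names are ad hoc) that enumerates \(\mathcal{V }_s^d\), computes \(M_{d,s}\), and counts \(\mathcal{B }_{s,m}^d\) for small values of \(d\), \(s\) and \(m\).

```python
from itertools import combinations

def V_sd(d, s):
    """Enumerate V_s^d: all subsets V of {1,...,d} with |V| <= s (the empty set included)."""
    return [frozenset(c) for k in range(s + 1)
            for c in combinations(range(1, d + 1), k)]

def B_smd(d, s, m):
    """Enumerate B_{s,m}^d via supports: each eta is identified with the family of
    m distinct sets V in V_s^d for which eta_V = 1."""
    return list(combinations(V_sd(d, s), m))

d, s, m = 4, 2, 2
print(len(V_sd(d, s)))       # M_{d,s} = C(4,0) + C(4,1) + C(4,2) = 11
print(len(B_smd(d, s, m)))   # |B_{s,m}^d| = C(M_{d,s}, m) = 55
```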

1.2 Compound functional model

In this paper we impose the following assumption on the unknown function \(f\).

Compound functional model There exist an integer \(s\in \{ 1,\dots ,d\}\), a binary sequence \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\), a set of functions \(\{f_V\in L^2_0([0,1]^{|V|})\}_{V\in \mathcal{V }_s^d}\), and a constant \(\bar{f}\) such that

$$\begin{aligned} f({\varvec{x}})=\bar{f}+\sum _{V\in \mathcal{V }_s^d} f_V({\varvec{x}}_V)\eta _V =\bar{f}+ \sum _{V\in \mathop {\text{ supp}}({\varvec{\eta }})} f_V({\varvec{x}}_V),\qquad \forall {\varvec{x}}\in \mathbb{R }^d. \end{aligned}$$
(1)

The functions \(f_V\) are called the atoms of the compound model.

Note that, under the compound model, \(\bar{f}=\int _{[0,1]^d} f({\varvec{x}})\,d{\varvec{x}}\).
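
As a toy illustration of (1) (ours; the specific atoms are arbitrary), the following Python snippet builds a compound function with \(d=5\), \(s=2\), \(m=2\), whose only atoms are \(f_{\{1,2\}}\) and \(f_{\{4\}}\), both centered so that they belong to \(L^2_0\).

```python
import numpy as np

fbar = 0.3   # the constant term, equal to the integral of f over [0,1]^5

def f_12(x1, x2):
    # centered atom depending on the coordinates in V = {1, 2}
    return np.cos(2 * np.pi * x1) * np.cos(2 * np.pi * x2)

def f_4(x4):
    # centered atom depending on the single coordinate in V = {4}
    return x4 - 0.5

def f(x):
    # compound function (1): f(x) = fbar + f_{12}(x_1, x_2) + f_4(x_4)
    x = np.asarray(x, dtype=float)
    return fbar + f_12(x[0], x[1]) + f_4(x[3])

print(f([0.1, 0.2, 0.9, 0.7, 0.4]))
```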

The atoms \(f_V\) are assumed to be sufficiently regular, namely, each \(f_V\) is an element of a suitable functional class \(\Sigma _V\). In particular, one can consider a smoothness class \(\Sigma _V\) and, more specifically, the Sobolev ball of functions of \(s\) variables. In what follows, we will mainly deal with this example.

Given a collection \({\varvec{\Sigma }}=\{\Sigma _V\}_{V\in \mathcal{V }_s^d}\) of subsets of \(L^2_0([0,1]^s)\) and a subset \(\tilde{\mathcal{B }}\) of \(\mathcal{B }_{s,m}^d\), we define the classes

$$\begin{aligned} \mathcal{F }_{s,m}({\varvec{\Sigma }})=\bigcup \limits _{{\varvec{\eta }}\in \tilde{\mathcal{B }}} \mathcal{F }_{{\varvec{\eta }}}({\varvec{\Sigma }}), \end{aligned}$$

where

$$\begin{aligned} \mathcal{F }_{{\varvec{\eta }}}({\varvec{\Sigma }})= \left\{ f:\mathbb{R }^d\rightarrow \mathbb{R }\ :\ \exists \bar{f}\in \mathbb{R },\ \{f_V\}_{V\in \mathop {\text{ supp}}({\varvec{\eta }})},\ f_V\in \Sigma _V, \text{ such that } f=\bar{f}+\sum _{V\in \mathop {\text{ supp}}({\varvec{\eta }})} f_V \right\} . \end{aligned}$$

The class \(\mathcal{F }_{s,m}({\varvec{\Sigma }})\) is defined for any \(s\in \{ 0,\dots ,d\}\) and any \(m \in \{0,\dots , M_{d,s}\}\). In what follows, we assume that \(\tilde{\mathcal{B }}\) is fixed and for this reason we do not include it in the notation. Examples of \(\tilde{\mathcal{B }}\) are the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that the sets \(V\in \mathop {\text{ supp}}({\varvec{\eta }})\) are pairwise disjoint, or the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that every set \(V\) from \(\mathop {\text{ supp}}({\varvec{\eta }})\) has a non-empty intersection with at most one other set from \(\mathop {\text{ supp}}({\varvec{\eta }})\).

It is clear from the definition that the parameters \(\big ({\varvec{\eta }},\{f_V\}_{V\in \mathop {\text{ supp}}({\varvec{\eta }})}\big )\) are not identifiable. Indeed, two different collections \(\big ({\varvec{\eta }},\{f_V\}_{V\in \mathop {\text{ supp}}({\varvec{\eta }})}\big )\) and \(\big (\bar{\varvec{\eta }},\{\bar{f}_V\}_{V\in \mathop {\text{ supp}}(\bar{\varvec{\eta }})}\big )\) may lead to the same compound function \(f\). Of course, this is not necessarily an issue as long as only the problem of estimating \(f\) is considered.

We now define the Sobolev classes of functions of many variables that will play the role of \(\Sigma _V\). Consider an orthonormal system of functions \(\{\varphi _{\varvec{j}}\}_{{\varvec{j}}\in \mathbb{Z }^d}\) in \( L^2([0,1]^d)\) such that \(\varphi _{\varvec{0}}({\varvec{x}})\equiv 1\). We assume that the system \(\{\varphi _{\varvec{j}}\}\) and the set \(\tilde{\mathcal{B }}\) are such that

$$\begin{aligned} \left\Vert\sum _{V\in \mathop {\text{ supp}}({\varvec{\eta }})} \sum _{\begin{array}{c} {\varvec{j}}: {\varvec{j}}\not =\varvec{0}\\ \mathop {\text{ supp}}({\varvec{j}})\subseteq V \end{array}} \theta _{{\varvec{j}},V} \varphi _{\varvec{j}}\right\Vert^2_2 \le C_*\sum _{V\in \mathop {\text{ supp}}({\varvec{\eta }})} \sum _{\begin{array}{c} {\varvec{j}}: {\varvec{j}}\not =\varvec{0}\\ \mathop {\text{ supp}}({\varvec{j}})\subseteq V \end{array}} \theta _{{\varvec{j}},V}^2, \end{aligned}$$
(2)

for all \({\varvec{\eta }}\in \tilde{\mathcal{B }}\) and all square-summable arrays \((\theta _{{\varvec{j}},V}, \,({\varvec{j}},V)\in \mathbb{Z }^d\times \mathcal{V }_s^d)\), where \(C_*>0\) is a constant independent of \(s,m\) and \(d\). For example, this condition holds with \(C_*=1\) if \(\tilde{\mathcal{B }}\) is the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that the sets \(V\in \mathop {\text{ supp}}({\varvec{\eta }})\) are pairwise disjoint, and with \(C_*=2\) if \(\tilde{\mathcal{B }}\) is the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that every set \(V\) from \(\mathop {\text{ supp}}({\varvec{\eta }})\) has a non-empty intersection with at most one other set from \(\mathop {\text{ supp}}({\varvec{\eta }})\).

One example of \(\{\varphi _{\varvec{j}}\}_{{\varvec{j}}\in \mathbb{Z }^d}\) is a tensor product orthonormal basis:

$$\begin{aligned} \varphi _{\varvec{j}}({\varvec{x}})=\bigotimes \limits _{\ell =1}^d \varphi _{j_\ell }(x_\ell ),\qquad \end{aligned}$$
(3)

where \({\varvec{j}}=(j_1,\dots ,j_d)\in \mathbb{Z }^d\) is a multi-index and \(\{\varphi _{k}\}, \,k\in \mathbb{Z }\), is an orthonormal basis in \( L^2([0,1])\). Specifically, we can take the trigonometric basis with \(\varphi _{0}(u)\equiv 1\) on \([0,1], \varphi _{k}(u)=\sqrt{2}\cos (2\pi \,k u)\) for \(k>0\) and \(\varphi _{k}(u)=\sqrt{2}\sin (2\pi \,k u)\) for \(k<0\). To ease notation, we set \(\theta _{\varvec{j}}[{f}]=\langle {f},\varphi _{\varvec{j}}\rangle \) for \({\varvec{j}}\in \mathbb{Z }^d\).
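
The next snippet (an illustrative Python sketch, ours; the Monte Carlo approximation of the inner product is of course not how coefficients would be obtained in the white noise model) spells out the tensor-product trigonometric basis (3) and checks one coefficient \(\theta _{\varvec{j}}[f]\) numerically.

```python
import numpy as np

def phi_1d(k, u):
    """Trigonometric basis on [0,1]: phi_0 = 1, phi_k = sqrt(2) cos(2 pi k u) for k > 0,
    phi_k = sqrt(2) sin(2 pi |k| u) for k < 0."""
    u = np.asarray(u, dtype=float)
    if k == 0:
        return np.ones_like(u)
    if k > 0:
        return np.sqrt(2.0) * np.cos(2 * np.pi * k * u)
    return np.sqrt(2.0) * np.sin(2 * np.pi * (-k) * u)

def phi_tensor(j, x):
    """Tensor-product basis function phi_j(x) = prod_l phi_{j_l}(x_l), cf. (3)."""
    return np.prod([phi_1d(jl, xl) for jl, xl in zip(j, x)], axis=0)

# Monte Carlo approximation of theta_j[f] = <f, phi_j> for a simple f in dimension d = 3.
rng = np.random.default_rng(0)
d = 3
f = lambda x: 0.3 + np.sqrt(2.0) * np.cos(2 * np.pi * x[0])   # theta_{(1,0,0)}[f] = 1
X = rng.random((50_000, d))
j = (1, 0, 0)
theta_j = np.mean([f(x) * phi_tensor(j, x) for x in X])
print(theta_j)   # approximately 1, up to Monte Carlo error
```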

For any set of indices \(V\subseteq \{1,\dots ,d\}\) and any \(\beta >0, L>0\), we define the Sobolev class of functions

$$\begin{aligned} W_V(\beta ,L)&= \bigg \{g\in L^2_0([0,1]^{d}) :\quad g=\sum _{{\varvec{j}}\in \mathbb{Z }^d:\mathop {\text{ supp}}({\varvec{j}})\subseteq V} \theta _{\varvec{j}}[g]\varphi _{\varvec{j}}\quad \text{ and} \quad \nonumber \\&\quad \sum _{{\varvec{j}}\in \mathbb{Z }^d} |{\varvec{j}}|_\infty ^{2\beta } \theta _{\varvec{j}}[g]^2\le L\bigg \}. \end{aligned}$$
(4)

Assuming that \(\{\varphi _{\varvec{j}}\}\) is the trigonometric basis and \(f\) is periodic with period one in each coordinate, i.e., \(f({\varvec{x}}+{\varvec{j}})=f({\varvec{x}})\) for every \({\varvec{x}}\in \mathbb{R }^d\) and every \({\varvec{j}}\in \mathbb{Z }^d\), the condition \(f_V\in W_{V}(\beta ,L)\) can be interpreted as the square integrability of all partial derivatives of \(f_V\) up to the order \(\beta \).

Let us give some examples of compound models.

  • Additive models are the special case \(s=1\) of compound models. Here, additive models are understood in a wider sense than originally defined by [26]. Namely, for \(s=1\) we have the model

    $$\begin{aligned} f({\varvec{x}})=\bar{f}+\sum _{j\in J} f_j(x_j), \qquad {\varvec{x}}=(x_1,\dots ,x_d)\in \mathbb{R }^d, \end{aligned}$$

    where \(J\) is any (unknown) subset of indices and not necessarily \(J=\{1,\dots , d\}\). Estimation and testing problems in this model when the atoms belong to some smoothness classes have been studied in [12, 14, 16, 19, 21, 27].

  • Single atom models are the special case \(m=1\) of compound models. If \(m=1\) we have \(f({\varvec{x}})=f_V({\varvec{x}}_V)\) for some unknown \(V\subseteq \{ 1,\dots ,d\}\), i.e., there exists only one set \(V\) for which \(\eta _V=1\), and \(|V|\le s\). Estimation and variable selection in this model were considered by [2, 7, 25]. The case of small \(s\) and large \(d\) is particularly interesting in the context of sparsity. In a parametric model, when \(f_V\) is a linear function, we are back to the sparse high-dimensional linear regression setting, which has been extensively studied, see, e.g., [29].

  • Tensor product models Let \(\mathcal{A }\) be a given finite subset of \(\mathbb{Z }\), and assume that \(\varphi _{\varvec{j}}\) is a tensor product basis defined by (3). Consider the following parametric class of functions

    $$\begin{aligned} {\varvec{T}}_{{\varvec{\eta }}}(\mathcal{A })= \left\{ f:\mathbb{R }^d\rightarrow \mathbb{R }\ :\ \exists \bar{f},\ \{\theta _{{\varvec{j}},V}\}, \text{ such that } f= \bar{f}+\sum _{V\in \mathop {\text{ supp}}({\varvec{\eta }})}\sum _{{\varvec{j}}\in \mathcal{J }_{V,\mathcal{A }}} \theta _{{\varvec{j}},V}\varphi _{\varvec{j}}\right\} , \end{aligned}$$
    (5)

    where

    $$\begin{aligned} \mathcal{J }_{V,\mathcal{A }}=\Big \{ {\varvec{j}}\in \mathcal{A }^d : \mathop {\text{ supp}}({\varvec{j}}) \subseteq V \Big \}. \end{aligned}$$
    (6)

    We say that a function \(f\) satisfies the tensor product model if it belongs to the set \({\varvec{T}}_{\varvec{\eta }}(\mathcal{A })\) for some \({\varvec{\eta }}\in \tilde{\mathcal{B }}\). We define

    $$\begin{aligned} \mathcal{F }_{s,m}({\varvec{T}}_{\!\!\mathcal{A }})=\bigcup \limits _{{\varvec{\eta }}\in \tilde{\mathcal{B }}} {\varvec{T}}_{\varvec{\eta }}(\mathcal{A }). \end{aligned}$$

    Important examples are sparse high-dimensional multilinear/polynomial systems. Motivated respectively by applications in genetics and signal processing, they have been recently studied by [20] in the context of compressed sensing without noise and by [15] in the case where the observations are corrupted by a Gaussian noise. With our notation, the models they considered are the tensor product models with \(\mathcal{A }=\{0,1\}\) (linear basis functions \(\varphi _j\)) in the multilinear model of [20] and \(\mathcal{A }=\{-1,0,1\}\) in the Volterra filtering problem of [15] (second-order Volterra systems with \(\varphi _0(x)\equiv 1, \varphi _1(x)\propto (x-1/2)\) and \(\varphi _{-1}(x)\propto x^2-x+1/6\)). More generally, the set \(\mathcal{A }\) should be of small cardinality to guarantee efficient dimension reduction. Another approach is to introduce hierarchical structures on the coefficients of tensor product representation [1, 4].
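
To make the tensor product model concrete, here is a small Python sketch (ours) of a member of \({\varvec{T}}_{\varvec{\eta }}(\mathcal{A })\) for the Volterra-type case \(\mathcal{A }=\{-1,0,1\}\), with basis functions \(\varphi _0(x)\equiv 1\), \(\varphi _1(x)\propto x-1/2\), \(\varphi _{-1}(x)\propto x^2-x+1/6\) (here \(L^2\)-normalized), \(d=4\) and a single atom supported on \(V=\{1,3\}\); the particular coefficients are arbitrary.

```python
import numpy as np

def phi(k, u):
    """L2-normalized polynomial basis on [0,1]: phi_0 = 1, phi_1 ∝ u - 1/2, phi_{-1} ∝ u^2 - u + 1/6."""
    if k == 0:
        return np.ones_like(np.asarray(u, dtype=float))
    if k == 1:
        return np.sqrt(12.0) * (u - 0.5)
    if k == -1:
        return np.sqrt(180.0) * (u ** 2 - u + 1.0 / 6.0)
    raise ValueError("k must belong to A = {-1, 0, 1}")

# A member of T_eta(A) with d = 4 and one atom supported on V = {1, 3}:
# f(x) = fbar + sum over j in A^d with supp(j) ⊆ V of theta_j * prod_l phi_{j_l}(x_l).
fbar = 0.1
theta = {(1, 0, 0, 0): 0.5,     # depends on x_1 only
         (0, 0, -1, 0): -0.2,   # depends on x_3 only
         (1, 0, 1, 0): 0.7}     # interaction between x_1 and x_3

def f(x):
    x = np.asarray(x, dtype=float)
    val = fbar
    for j, theta_j in theta.items():
        val += theta_j * np.prod([phi(jl, xl) for jl, xl in zip(j, x)])
    return val

print(f([0.2, 0.9, 0.6, 0.4]))
```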

In what follows, we assume that \(f\) belongs to the functional class \(\mathcal{F }_{s,m}({\varvec{\Sigma }})\) where either \({\varvec{\Sigma }}=\{W_{V}(\beta ,L)\}_{V\in \mathcal{V }_s^d}\triangleq \varvec{W}(\beta ,L)\) or \({\varvec{\Sigma }}={\varvec{T}}_{\!\!\mathcal{A }}\).

The compound model is described by three main parameters: the dimension \(m\), which we call the macroscopic parameter and which characterizes the complexity of possible structure vectors \({\varvec{\eta }}\); the dimension \(s\) of the atoms in the compound, which we call the microscopic parameter; and the complexity of the functional class \({\varvec{\Sigma }}\). The latter can be described by entropy numbers of \({\varvec{\Sigma }}\) in suitable norms and, in the particular case of Sobolev classes, is naturally characterized by the smoothness parameter \(\beta \). The integers \(m\) and \(s\) are “effective dimension” parameters. As they grow, the structure becomes less pronounced and the compound model approaches global nonparametric regression in dimension \(d\), which is known to suffer from the curse of dimensionality already for moderate \(d\). Therefore, the interesting case is the sparsity scenario where \(s\) and/or \(m\) are small.

2 Overview of the results and relation to the previous work

Several statistical problems arise naturally in the context of the compound functional model.

  • Estimation of \(f\) This is the subject of the present paper. We measure the risk of an arbitrary estimator \(\widetilde{f}_\varepsilon \) by its mean integrated squared error \(\mathbf{E}_f[\Vert \widetilde{f}_\varepsilon -f\Vert _2^2]\) and we study the minimax risk

    $$\begin{aligned} \inf _{\widetilde{f}_\varepsilon }\sup _{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }})} \mathbf{E}_f[\Vert \widetilde{f}_\varepsilon -f\Vert _2^2], \end{aligned}$$

    where \(\inf _{\widetilde{f}_\varepsilon }\) denotes the infimum over all estimators. A first general question is to establish the minimax rates of estimation, i.e., to find values \(\psi _{s,m,\varepsilon }({\varvec{\Sigma }})\) such that

    $$\begin{aligned} \inf _{\widetilde{f}_\varepsilon }\sup _{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }})} \mathbf{E}_f[\Vert \widetilde{f}_\varepsilon -f\Vert _2^2] \asymp \psi _{s,m,\varepsilon }({\varvec{\Sigma }}), \end{aligned}$$

    when \({\varvec{\Sigma }}\) is a Sobolev, Hölder or other class of functions. A second question is to construct optimal estimators in a minimax sense, i.e., estimators \(\widehat{f}_\varepsilon \) such that

    $$\begin{aligned} \sup _{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }})} \mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2] \le C \psi _{s,m,\varepsilon }({\varvec{\Sigma }}), \end{aligned}$$
    (7)

    for some constant \(C\) independent of \(s,m,\varepsilon \) and \({\varvec{\Sigma }}\). Some results on minimax rates of estimation of \(f\) are available only for the case \(s=1\) (cf. the discussion below). Finally, a third question that we address here is whether the optimal rate can be attained adaptively, i.e., whether one can construct an estimator \(\widehat{f}_\varepsilon \) that satisfies (7) simultaneously for all \(s,m,\beta \) and \(L\) when \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\). We will show that the answer to this question is positive.

  • Variable selection Assume that \(m=1\). This means that \(f({\varvec{x}})=f_V({\varvec{x}}_V)\) for some unknown \(V\subseteq \{ 1,\dots ,d\}\), i.e., there exists only one set \(V\) for which \(\eta _V=1\) (a single atom model). Then it is of interest to identify \(V\) under the constraint \(|V|\le s\). In particular, \(d\) can be very large while \(s\) can be small. This corresponds to estimating the relevant covariates and generalizes the problem of selecting the sparsity pattern in linear regression. An estimator \(\widehat{V}_n\subseteq \{ 1,\dots ,d\}\) of \(V\) is considered good if the probability \(\mathbf{P}(\widehat{V}_n=V)\) is close to one.

  • Hypotheses testing (detection) The problem is to test the hypothesis \(H_0: f\equiv 0\) (no signal) against the alternative \(H_1 : f\in \mathcal{A }\), where \(\mathcal{A }=\big \{f\in \mathcal{F }_{s,m}({\varvec{\Sigma }}): \Vert f\Vert _2\ge r\big \}\). Here, it is interesting to characterize the minimax rates of separation \(r>0\) in terms of \(s, m\) and \({\varvec{\Sigma }}\).

Some of the above three problems have been studied in the literature in the special cases \(s=1\) (additive model) and \(m=1\) (single atom model). Ingster and Lepski [14] studied the problem of testing in the additive model and provided asymptotic minimax rates of separation. Sharp asymptotic optimality under additional assumptions in the same problem was obtained by Gayraud and Ingster [12]. Recently, Comminges and Dalalyan [7] established tight conditions for variable selection in the single atom model. We also mention an earlier work of Bertin and Lecué [2] dealing with variable selection.

The problem of estimation has also been considered for the additive model with the class \({\varvec{\Sigma }}\) defined as a reproducing kernel Hilbert space, cf. Koltchinskii and Yuan [16], Raskutti et al. [21]. In particular, these papers showed that if \(s=1\) and \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\) is a Sobolev class, then there is an estimator of \(f\) for which the mean integrated squared error converges to zero at the rate

$$\begin{aligned} \max \Big (m\varepsilon ^{4\beta /(2\beta +1)},\;m\varepsilon ^2\log d\Big ). \end{aligned}$$
(8)

Furthermore, Raskutti et al. [21, Thm. 2] provided the following lower bound on the minimax risk:

$$\begin{aligned} \max \left(m\varepsilon ^{4\beta /(2\beta +1)},\;m\varepsilon ^2\log \left(\frac{d}{m}\right)\right). \end{aligned}$$
(9)

Note that when \(m\) is proportional to \(d\), this lower bound differs from the upper bound by a logarithmic factor. It should also be noted that the upper bounds in these papers are achieved by estimators that are not adaptive, in the sense that they require knowledge of the smoothness index \(\beta \).

In this paper, we establish non-asymptotic upper and lower bounds on the minimax risk for the model with Sobolev smoothness class \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\). We will prove that, up to a multiplicative constant, the minimax risk behaves as

$$\begin{aligned} \max \left\{ mL^{s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)},\;ms\varepsilon ^2\log \left(\frac{d}{sm^{1/s}}\right)\right\} \wedge L \end{aligned}$$
(10)

(we assume here \(d/(sm^{1/s})>1\); otherwise a constant factor \(>1\) should be inserted under the logarithm, cf. the results below). In addition, we demonstrate that this rate can be reached in an adaptive way, that is, without the knowledge of \(\beta , s\), and \(m\). The rate (10) is non-asymptotic, which explains, in particular, the presence of the minimum with the constant \(L\) in (10). For \(s=1\), i.e., for the additive regression model, our rate matches the lower bound of Raskutti et al. [21].

For \(m=1\), i.e., when \(f({\varvec{x}})=f_V({\varvec{x}}_V)\) for some unknown \(V\subseteq \{ 1,\dots ,d\}\) (the single atom model), the minimax rate of convergence takes the form

$$\begin{aligned} \max \left\{ L^{s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)},\;s\varepsilon ^2\log \left(\frac{d}{s}\right)\right\} \wedge L. \end{aligned}$$
(11)

This rate accounts for two effects, namely, the accuracy of nonparametric estimation of \(f\) for a fixed macroscopic structure parameter \({\varvec{\eta }}\), cf. the first term \(\sim \varepsilon ^{4\beta /(2\beta +s)}\), and the complexity of the structure itself (irrespective of the nonparametric nature of the microscopic components \(f_V({\varvec{x}}_V)\)). In particular, the second term \(\sim s\varepsilon ^2\log ({d}/{s})\) in (11) coincides with the optimal rate of prediction in the linear regression model under the standard sparsity assumption. This is what we obtain in the limiting case when \(\beta \) tends to infinity. It is important to note that the optimal rates depend only logarithmically on the ambient dimension \(d\). Thus, even if \(d\) is large, rate-optimal estimators achieve good performance under the sparsity scenario, when \(s\) and \(m\) are small.
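
To see how the two terms in (11) (and, more generally, in (10)) trade off, the following small Python function (ours, purely illustrative; all constants are ignored) evaluates the rate for given parameters.

```python
import numpy as np

def minimax_rate(eps, d, s, m, beta, L):
    """Evaluate the rate (10): max{ m L^{s/(2b+s)} eps^{4b/(2b+s)},
    m s eps^2 log(d/(s m^{1/s})) } truncated at L (multiplicative constants ignored)."""
    nonparametric = m * L ** (s / (2 * beta + s)) * eps ** (4 * beta / (2 * beta + s))
    structural = m * s * eps ** 2 * np.log(d / (s * m ** (1.0 / s)))
    return min(max(nonparametric, structural), L)

# For moderate d the nonparametric term dominates; for very large d the
# structural (log d) term takes over.
print(minimax_rate(eps=1e-2, d=10**3, s=2, m=3, beta=2.0, L=1.0))
print(minimax_rate(eps=1e-2, d=10**12, s=2, m=3, beta=2.0, L=1.0))
```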

3 The estimator and upper bounds on the minimax risk

In this section, we suggest an estimator attaining the minimax rate. It is constructed in the following two steps (a schematic numerical sketch of the whole construction is given after the description of the two steps).

  • Constructing weak estimators At this step, we proceed as if the macroscopic structure parameter \({\varvec{\eta }}\) was known and denote by \(V_1,\ldots ,V_m\) the elements of the support of \({\varvec{\eta }}\). The goal is to provide for each \({\varvec{\eta }}\) a family of “simple” estimators of \(f\)—indexed by some parameter \({\varvec{t}}\)—containing a rate-minimax one. To this end, we first project \({\varvec{Y}}\) onto the basis functions \(\{\varphi _{\varvec{j}}:{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}\}\) and denote

    $$\begin{aligned} {\varvec{Y}}_\varepsilon =(Y_{\varvec{j}}\triangleq Y(\varphi _{\varvec{j}}): {\varvec{j}}\in \mathbb{Z }^d, \, {|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}). \end{aligned}$$
    (12)

    Then, we consider a collection \(\{\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}:\ {\varvec{t}}\in {[\![}0,\varepsilon ^{-2}{]\!]}^m\}\) of projection estimators of the vector \({\varvec{\theta }}_\varepsilon =(\theta _{\varvec{j}}[f])_{{\varvec{j}}\in \mathbb{Z }^d: |{\varvec{j}}|_\infty \le \varepsilon ^{-2}}\). The role of each component \(t_\ell \) of \({\varvec{t}}\) is to indicate the cut-off level of the coefficients \(\theta _{{\varvec{j}}}\) corresponding to the atom \(f_{V_\ell }\), that is, the level of indices beyond which the coefficients are estimated by \(0\). To be more precise, for an integer-valued vector \({\varvec{t}}=(t_{V_\ell }, \ell =1,\dots , m)\in [0, \varepsilon ^{-2}]^m\) we set \(\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}=(\widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},{\varvec{j}}}:{\varvec{j}}\in \mathbb{Z }^d, \,{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2})\), where \(\widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},\varvec{0}}=Y_{\varvec{0}}\) and

    $$\begin{aligned} \widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},{\varvec{j}}}= {\left\{ \begin{array}{ll} Y_{\varvec{j}},&\exists \ell \, \text{ s.} \text{ t.} \mathop {\text{ supp}}({\varvec{j}})\subseteq V_\ell \, , \, {|{{\varvec{j}}}|}_\infty \in [1, t_{V_\ell }], \\ 0,&\text{ otherwise} \end{array}\right.} \end{aligned}$$

    if \({\varvec{j}}\ne \varvec{0}\). Based on these estimators of the coefficients of \(f\), we recover the function \(f\) using the estimator

    $$\begin{aligned} \widehat{f}_{{\varvec{t}},{\varvec{\eta }}}({\varvec{x}})=\sum \limits _{{\varvec{j}}\in \mathbb{Z }^d:{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}} \widehat{\theta }_{{\varvec{t}},{\varvec{\eta }},{\varvec{j}}} \varphi _{\varvec{j}}({\varvec{x}}). \end{aligned}$$
  • Smoothness- and structure-adaptive estimation The goal in this step is to combine the weak estimators \(\{\widehat{f}_{{\varvec{t}},{\varvec{\eta }}}\}_{{\varvec{t}},{\varvec{\eta }}}\) in order to get a structure and smoothness adaptive estimator of \(f\) with a risk which is as small as possible. To this end, we use a version of exponentially weighted aggregate [10, 11, 18] in the spirit of sparsity pattern aggregation as described in [23, 24]. More precisely, for every pair of integers \((s,m)\) such that \(s\in \{1,\ldots ,d\}\) and \(m\in \{1,\ldots ,M_{d,s}\}\), we define prior probabilities for \(({\varvec{t}},{\varvec{\eta }})\in {[\![}0,\varepsilon ^{-2}{]\!]}^m\times (\mathcal{B }_{s,m}^d\setminus \mathcal{B }_{s-1,m}^d)\) by

    $$\begin{aligned} \pi _{{\varvec{t}},{\varvec{\eta }}}=\frac{2^{-sm}}{H_d(1+[\varepsilon ^{-2}])^m|\mathcal{B }_{s,m}^d\setminus \mathcal{B }_{s-1,m}^d|},\qquad H_d=\sum _{s=0}^d\sum _{m=1}^{M_{d,s}} 2^{-sm}\le e. \end{aligned}$$
    (13)

    For \(s=0\) and the unique \({\varvec{\eta }}_0\in \mathcal{B }_{0,1}^d\) we consider only one weak estimator \(\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}_0}\) with all entries zero except for the entry \(\widehat{\theta }_{{\varvec{t}},{\varvec{\eta }}_0,\varvec{0}}\), which is equal to \(Y_{\varvec{0}}\). We set \(\pi _{{\varvec{t}},{\varvec{\eta }}_0}=1/H_d\). It is easy to see that \({\varvec{\pi }}=\Big (\pi _{{\varvec{t}},{\varvec{\eta }}};({\varvec{t}},{\varvec{\eta }})\in \bigcup _{s,m} \{{[\![}0,\varepsilon ^{-2}{]\!]}^m\times \mathcal{B }_{s,m}^d\}\Big )\) defines a probability distribution. For any pair \(({\varvec{t}},{\varvec{\eta }})\) we introduce the penalty function

    $$\begin{aligned} \text{ pen}({\varvec{t}},{\varvec{\eta }})=2\varepsilon ^2\prod _{V\in \mathop {\text{ supp}}({\varvec{\eta }})} (2t_V+1)^{|V|} \end{aligned}$$

    and define the vector of coefficients \(\widehat{\varvec{\theta }}_\varepsilon =(\widehat{\theta }_{\varepsilon ,{\varvec{j}}}:{\varvec{j}}\in \mathbb{Z }^d, \, {|{\varvec{j}}|}_\infty \le \varepsilon ^{-2})\) by

    $$\begin{aligned} \widehat{\varvec{\theta }}_\varepsilon =\sum _{s=1}^d\sum _{m=1}^{M_{d,s}}\sum _{({\varvec{t}},{\varvec{\eta }})} \widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}} \frac{\exp \big \{-\frac{1}{4\varepsilon ^2}\big (|{\varvec{Y}}_\varepsilon -\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}|_2^2+\text{ pen}({\varvec{t}},{\varvec{\eta }})\big )\big \}\pi _{{\varvec{t}},{\varvec{\eta }}}}{\sum _{\bar{s}=1}^d\sum _{\bar{m}=1}^{M_{d,\bar{s}}}\sum _{(\bar{\varvec{t}},\bar{\varvec{\eta }})}\exp \big \{-\frac{1}{4\varepsilon ^2} \big (|{\varvec{Y}}_\varepsilon -\widehat{\varvec{\theta }}_{\bar{\varvec{t}},\bar{\varvec{\eta }}}|_2^2+\text{ pen}(\bar{\varvec{t}},\bar{\varvec{\eta }})\big )\big \}\pi _{\bar{\varvec{t}},\bar{\varvec{\eta }}}}, \end{aligned}$$
    (14)

    where the summations \(\sum _{({\varvec{t}},{\varvec{\eta }})}\) and \(\sum _{(\bar{\varvec{t}},\bar{\varvec{\eta }})}\) correspond to \(({\varvec{t}},{\varvec{\eta }}) \in {[\![}0,\varepsilon ^{-2}{]\!]}^m\times (\mathcal{B }^d_{s,m}\setminus \mathcal{B }^d_{s-1,m})\) and \((\bar{\varvec{t}},\bar{\varvec{\eta }})\in {[\![}0,\varepsilon ^{-2}{]\!]}^{\bar{m}}\times (\mathcal{B }^d_{\bar{s},\bar{m}}\setminus \mathcal{B }^d_{\bar{s}-1,\bar{m}})\), respectively. The final estimator of \(f\) is

    $$\begin{aligned} \widehat{f}_\varepsilon ({\varvec{x}})=\sum \limits _{{\varvec{j}}\in \mathbb{Z }^d:{|{\varvec{j}}|}_\infty \le \varepsilon ^{-2}} \widehat{\theta }_{\varepsilon ,{\varvec{j}}}\varphi _{\varvec{j}}({\varvec{x}}), \qquad \forall {\varvec{x}}\in [0,1]^d. \end{aligned}$$
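
The following Python sketch (ours) mimics the whole construction in a toy sequence-space setting: the true \(f\) is replaced by its coefficient array, \(\varepsilon ^{-2}\) is replaced by a small cut-off \(T\), the parameters \(s\) and \(m\) are fixed to 1, and a flat prior over the candidate pairs \(({\varvec{t}},{\varvec{\eta }})\) is used instead of (13). It is only meant to show the mechanics of the aggregation (14), not to be an efficient implementation.

```python
import numpy as np
from itertools import combinations, product

rng = np.random.default_rng(1)

# Toy sequence-space version of the two-step estimator: tiny dimensions, s = m = 1,
# flat prior over the candidates (the paper uses the prior (13) instead).
d, s, m = 3, 1, 1
T = 4            # maximal frequency per coordinate (plays the role of eps^{-2})
eps = 0.1

# All multi-indices j in Z^d with |j|_inf <= T
J = list(product(range(-T, T + 1), repeat=d))

def supp(j):
    return frozenset(l + 1 for l, jl in enumerate(j) if jl != 0)

# True coefficients: a single atom depending on coordinate 1 only
theta_true = {j: 0.0 for j in J}
theta_true[(0, 0, 0)] = 0.5
theta_true[(1, 0, 0)] = 1.0
theta_true[(-2, 0, 0)] = 0.4

# Observations in the Gaussian sequence model: Y_j = theta_j + eps * xi_j
Y = {j: theta_true[j] + eps * rng.standard_normal() for j in J}

def weak_estimator(t, V):
    """Projection estimator: keep Y_j if supp(j) ⊆ V and 1 <= |j|_inf <= t,
    always keep the constant term Y_0, and put 0 elsewhere."""
    return {j: (Y[j] if (j == (0,) * d or (supp(j) <= V and 1 <= max(map(abs, j)) <= t))
                else 0.0) for j in J}

# Exponentially weighted aggregation over the candidate pairs (t, V), cf. (14)
candidates = [(t, frozenset(V)) for V in combinations(range(1, d + 1), s)
              for t in range(0, T + 1)]
log_w, thetas = [], []
for t, V in candidates:
    th = weak_estimator(t, V)
    residual = sum((Y[j] - th[j]) ** 2 for j in J)
    pen = 2 * eps ** 2 * (2 * t + 1) ** len(V)          # pen(t, eta) for m = 1
    log_w.append(-(residual + pen) / (4 * eps ** 2))
    thetas.append(th)
w = np.exp(np.array(log_w) - max(log_w))
w /= w.sum()
theta_hat = {j: sum(wk * th[j] for wk, th in zip(w, thetas)) for j in J}
print(sum((theta_hat[j] - theta_true[j]) ** 2 for j in J))   # small squared error
```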

Note that each \(\widehat{\varvec{\theta }}_{{\varvec{t}},{\varvec{\eta }}}\) is a projection estimator of the vector \({\varvec{\theta }}=(\theta _{\varvec{j}}[f])_{{\varvec{j}}\in \mathbb{Z }^d}\). Hence, \(\widehat{f}_\varepsilon \) is a convex combination of projection estimators. We also note that, to construct \(\widehat{f}_\varepsilon \), we only need to know \(\varepsilon \) and \(d\). Therefore, the estimator is adaptive to all other parameters of the model, such as \(s, m\), the parameters that define the class \({\varvec{\Sigma }}\) and the choice of a particular subset \(\tilde{\mathcal{B }}\) of \(\mathcal{B }^d_{s,m}\).

The following theorem gives an upper bound on the risk of the estimator \(\widehat{f}_\varepsilon \) when \({\varvec{\Sigma }}=\varvec{W}(\beta ,L)\).

Theorem 1

Let \(\beta >0\) and \(L>0\) be such that \(\log (\varepsilon ^{-2})\ge (2\beta )^{-1}\log (L)\) and \(L>\varepsilon ^{2}\log (e\varepsilon ^{-2})^{\frac{2\beta +s}{s}}\). Let \(\tilde{\mathcal{B }}\) be any subset of \(\mathcal{B }^d_{s,m}\). Assume that condition (2) holds. Then, for some constant \(C({\beta })>0\) depending only on \(\beta \) we have

$$\begin{aligned}&\sup _{f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))}\mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2]\nonumber \\&\quad \le (6L)\wedge \Big (m\Big \{ C({\beta })L^{\frac{s}{2\beta +s}} \varepsilon ^{\frac{4\beta }{2\beta +s}}+4s\varepsilon ^2\log \Big (\frac{2e^3d}{sm^{1/s}}\Big )\Big \}\Big )\,. \end{aligned}$$
(15)

Proof

Since the functions \(\varphi _{\varvec{j}}\) are orthonormal, \({\varvec{Y}}_\varepsilon \) is composed of independent Gaussian random variables with common variance equal to \(\varepsilon ^2\). Thus, the array \({\varvec{Y}}_\varepsilon \) defined by (12) obeys the Gaussian sequence model studied in [18]. Therefore, using Parseval’s theorem and [18, Cor. 6] we obtain that the estimator \(\widehat{f}_\varepsilon \) satisfies, for all \(f\),

$$\begin{aligned} \mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2]&\le \min _{{\varvec{t}},{\varvec{\eta }}}\Big (\mathbf{E}_f[\Vert \widehat{f}_{{\varvec{t}},{\varvec{\eta }}}-f\Vert _2^2]+4\varepsilon ^2\log (\pi _{{\varvec{t}},{\varvec{\eta }}}^{-1})\Big ), \end{aligned}$$
(16)

where the minimum is taken over all \(({\varvec{t}}, {\varvec{\eta }})\in \bigcup _{s,m} \{{[\![}0,\varepsilon ^{-2}{]\!]}^m\times \mathcal{B }_{s,m}^d\}\). Denote by \({\varvec{\eta }}_0\) the unique element of \(\mathcal{B }_{0,1}^d\), for which \(\mathop {\text{ supp}}({\varvec{\eta }}_0)=\{\emptyset \}\). The corresponding estimator \(\widehat{f}_{{\varvec{t}},{\varvec{\eta }}_0}\) coincides with the constant function equal to \(Y_{\varvec{0}}\) and its risk is bounded by \(\varepsilon ^2+L\) for all \(f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\). Therefore,

$$\begin{aligned} \sup _{f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))}\mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2]&\le \sup _{f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))}\mathbf{E}_f[\Vert \widehat{f}_{{\varvec{t}},{\varvec{\eta }}_0}-f\Vert _2^2]+4\varepsilon ^2\log (\pi _{{\varvec{t}},{\varvec{\eta }}_0}^{-1})\nonumber \\&\le \varepsilon ^2+L+4\varepsilon ^2\le 6L. \end{aligned}$$
(17)

Take now any \(f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\), and let \({\varvec{\eta }}^*\in \tilde{\mathcal{B }}\subseteq \mathcal{B }_{s,m}^d\) be such that \(f\in \mathcal{F }_{{\varvec{\eta }}^*}(\varvec{W}(\beta ,L))\). Then it follows from (16) that

$$\begin{aligned} \mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2]&\le \min _{{\varvec{t}}\in {[\![}0,\varepsilon ^{-2}{]\!]}^m}\Big (\mathbf{E}_f[\Vert \widehat{f}_{{\varvec{t}},{\varvec{\eta }}^*}-f\Vert _2^2]+4\varepsilon ^2\log (\pi _{{\varvec{t}},{\varvec{\eta }}^*}^{-1})\Big )\nonumber \\&\le \min _{{\varvec{t}}\in {[\![}0,\varepsilon ^{-2}{]\!]}^m}\mathbf{E}_f[\Vert \widehat{f}_{{\varvec{t}},{\varvec{\eta }}^*}-f\Vert _2^2]+4\varepsilon ^2 \big (m\log (2\varepsilon ^{-2})\nonumber \\&\quad +\,ms\log (2)+\log (e|\mathcal{B }_{s,m}^d|)\big ). \end{aligned}$$
(18)

Note that for all \(d,s\in \mathbb{N }^*\) such that \(s\le d\) we have

$$\begin{aligned} M_{d,s}=\sum _{\ell =0}^s \binom{d}{\ell }\le \bigg (\frac{ed}{s}\bigg )^{s}\quad \text{ and}\quad |\mathcal{B }_{s,m}^d|\le \binom{M_{d,s}}{m}\le \bigg (\frac{eM_{d,s}}{m}\bigg )^m\le \bigg (\frac{e^2d}{sm^{1/s}}\bigg )^{ms}. \end{aligned}$$
(19)
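
As a quick numerical sanity check of the combinatorial bounds (19) (ours, not part of the proof), one may verify them directly for a few values of \(d\), \(s\), \(m\):

```python
from math import comb, e

def M(d, s):
    # M_{d,s} = sum_{l=0}^{s} binom(d, l)
    return sum(comb(d, l) for l in range(s + 1))

for d, s, m in [(20, 3, 4), (100, 2, 10), (50, 5, 2)]:
    assert M(d, s) <= (e * d / s) ** s
    assert comb(M(d, s), m) <= (e * M(d, s) / m) ** m <= (e ** 2 * d / (s * m ** (1 / s))) ** (m * s)
    print(d, s, m, M(d, s), comb(M(d, s), m))
```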

Also, we have the following bound on the risk of the estimator \(\widehat{f}_{{\varvec{t}},{\varvec{\eta }}}\) for each \({\varvec{\eta }}\in \tilde{\mathcal{B }}\) and for an appropriate choice of the bandwidth parameter \({\varvec{t}}\in {[\![}0,\varepsilon ^{-2}{]\!]}^m\).

Lemma 1

Let \(\beta >0\), \(L\ge \varepsilon ^{2}\) be such that \(\log (\varepsilon ^{-2})\ge (2\beta )^{-1}\log (L)\). Let \({\varvec{t}}\in {[\![}0,\varepsilon ^{-2}{]\!]}^m\) be a vector with integer coordinates \(t_{V_\ell }=[(L/(3^{|V_\ell |}\varepsilon ^2))^{1/(2\beta +|V_\ell |)}\wedge \varepsilon ^{-2}], \ell =1,\dots ,m\). Assume that condition (2) holds. Then

$$\begin{aligned} \sup _{f\in \mathcal{F }_{{\varvec{\eta }}}(\varvec{W}(\beta ,L))}\mathbf{E}_f[\Vert \widehat{f}_{{\varvec{t}},{\varvec{\eta }}}-f\Vert _2^2]&\le 2C_* 3^{2\beta \wedge s}\, m\,L^{s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)},\nonumber \\&\quad \forall \; {\varvec{\eta }}\in \tilde{\mathcal{B }}\subset {\mathcal{B }^d_{s,m}}. \end{aligned}$$
(20)

A proof of this lemma is given in the appendix.

Combining (18) with (19) and (20) yields the following upper bound on the risk of \(\widehat{f}_\varepsilon \) :

$$\begin{aligned}&\sup _{f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))}\mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2]\\&\qquad \le m\bigg \{ C_{\beta }L^{\frac{s}{2\beta +s}} \varepsilon ^{\frac{4\beta }{2\beta +s}}+4\varepsilon ^2\log (2\varepsilon ^{-2}) +4s\varepsilon ^2\log \bigg (\frac{2e^3d}{sm^{1/s}}\bigg )\bigg \} \end{aligned}$$

where \(C_{\beta }>0\) is a constant depending only on \(\beta \). The assumptions of the theorem guarantee that \( \varepsilon ^2\log (2\varepsilon ^{-2})\le L^{\frac{s}{2\beta +s}} \varepsilon ^{\frac{4\beta }{2\beta +s}}\), so that the desired result follows from (17) and the last display.

The behavior of the estimator \(\widehat{f}_\varepsilon \) in the case \({\varvec{\Sigma }}={\varvec{T}}_{\!\!\mathcal{A }}\) is described in the next theorem.

Theorem 2

Assume that \(k=\max \{|\ell |: \, \ell \in \mathcal{A }\}<\varepsilon ^{-2}\). Then

$$\begin{aligned} \sup _{f\in \mathcal{F }_{s,m}({\varvec{T}}_{\!\!\mathcal{A }})}\mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2]\le m\varepsilon ^2 \Big \{ (2k+1)^s +4\log (2\varepsilon ^{-2})+4s\log \Big (\frac{2e^3d}{sm^{1/s}}\Big )\Big \}\,.\nonumber \\ \end{aligned}$$
(21)

The proof of Theorem 2 follows the same lines as that of Theorem 1. We take \(f\in \mathcal{F }_{s,m}({\varvec{T}}_{\!\!\mathcal{A }})\), and let \({\varvec{\eta }}^*\in \tilde{\mathcal{B }}\subseteq \mathcal{B }_{s,m}^d\) be such that \(f\in \mathcal{F }_{{\varvec{\eta }}^*}({\varvec{T}}_{\!\!\mathcal{A }})\). Let \({\varvec{t}}^*\in \mathbb{R }^m\) be the vector with all coordinates equal to \(k\). Then the same argument as in (18) yields

$$\begin{aligned} \mathbf{E}_f[\Vert \widehat{f}_\varepsilon -f\Vert _2^2]&\le \mathbf{E}_f[\Vert \widehat{f}_{{\varvec{t}}^*,{\varvec{\eta }}^*}-f\Vert _2^2]\nonumber \\&+\, 4\varepsilon ^2 \left(m\log (2\varepsilon ^{-2})+ms\log (2)+\log (e|\mathcal{B }_{s,m}^d|)\right). \end{aligned}$$
(22)

We can write \(\mathrm{supp}({\varvec{\eta }}^*)=\{V_1,\dots ,V_m\}\) where \(|V_\ell |\le s\). Since the model is parametric, there is no bias term in the expression for the risk on the right hand side of (22) and we have (cf. (29)):

$$\begin{aligned} \mathbf{E}_f[\Vert \widehat{f}_{{\varvec{t}}^*,{\varvec{\eta }}^*}-f\Vert _2^2]&\le \sum \limits _{\ell =1}^m\ \ \sum \limits _{{\varvec{j}}:\mathop {\text{ supp}}({\varvec{j}})\subseteq V_\ell }\varepsilon ^2\mathbf{1}_{\{{|{\varvec{j}}|}_\infty \le k\}} \le m\varepsilon ^2(2k+1)^{s}. \end{aligned}$$

Together with (22), this implies (21).

The bound of Theorem 2 is particularly interesting when \(k\) and \(s\) are small. For the examples of multilinear and polynomial systems [15, 20] we have \(k=1\). We also note that the result is much better than what can be obtained by using the Lasso. Indeed, consider the simplest case of single atom tensor product model (\(m=1\)). Since we do not know \(s\), we need to run the Lasso in the dimension \(p=(2k+1)^d\) and we can only guarantee the rate \(\varepsilon ^2\log p=d \varepsilon ^2\log (2 k+1)\), which is linear in the dimension \(d\). If \(d\) is very large and \(s\ll d\), this is much slower than the rate of Theorem 2.

4 Lower bound

In this section, we prove a minimax lower bound on the risk of any estimator over the class \(\mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\). We will assume that \(\{\varphi _{\varvec{j}}\}\) is the tensor-product trigonometric basis and \(\tilde{\mathcal{B }}= \tilde{\mathcal{B }}_{s,m}^d\), where \(\tilde{\mathcal{B }}_{s,m}^d\) denotes the set of all \({\varvec{\eta }}\in \mathcal{B }_{s,m}^d\) such that the sets \(V\in \mathrm{\mathop {\text{ supp}}}({\varvec{\eta }})\) are disjoint. Then condition (2) holds with equality and \(C_*=1\). We will split the proof into two steps. First, we establish a lower bound on the minimax risk in the case of known structure \({\varvec{\eta }}\), i.e., when \(f\) belongs to the class \(\mathcal{F }_{\varvec{\eta }}(\varvec{W}(\beta ,L))\) for some known parameters \({\varvec{\eta }}\in \tilde{\mathcal{B }}\) and \(\beta ,L>0\). We will show that the minimax risk tends to zero at a rate no faster than \(m\varepsilon ^{4\beta /(2\beta +s)}\). In the second step, we will prove that if \({\varvec{\eta }}\) is unknown, then the minimax rate is bounded from below by \(ms\varepsilon ^2(1+\log (d/(sm^{1/s})))\) if the function \(f\) belongs to \(\mathcal{F }_{\varvec{\eta }}(\Theta )\) for a set \(\Theta \) spanned by the tensor products involving only the functions \(\varphi _1\) and \(\varphi _{-1}\) of various arguments.

4.1 Lower bound for known structure \({\varvec{\eta }}\)

Proposition 1

Let \(\{\varphi _{\varvec{j}}\}\) be the tensor-product trigonometric basis and let \(s,m,d\) be positive integers satisfying \(d\ge sm\). Assume that \(L\ge \varepsilon ^2\). Then there exists an absolute constant \(C>0\) such that

$$\begin{aligned} \inf _{\widehat{f}}\sup _{f\in \mathcal{F }_{{\varvec{\eta }}}(\varvec{W}(\beta ,L))}\mathbf{E}_f[\Vert \widehat{f}-f\Vert _2^2]\ge CmL^{s/(2\beta +s)}\varepsilon ^{4\beta /(2\beta +s)},\qquad \forall \,{\varvec{\eta }}\in \tilde{\mathcal{B }}_{s,m}^d. \end{aligned}$$

Proof

Without loss of generality assume that \(m = 1\). We will also assume that \(L=1\) (this is without loss of generality as well, since we can replace \(\varepsilon \) by \(\varepsilon /\sqrt{L}\) and by our assumption this quantity is \(<\)1). After a renumbering if needed, we can assume that \({\varvec{\eta }}\) is such that \(\eta _V=1\) for \(V=\{1,\ldots ,s\}\) and \(\eta _V=0\) for \(V\ne \{1,\ldots ,s\}\).

Let \(t\) be an integer not smaller than \(4\). Then, the set \(I\) of all multi-indices \({\varvec{k}}\in \mathbb{Z }^{s}\) satisfying \(|{\varvec{k}}|_\infty \le t\) is of cardinality \(|I|\ge 9\). For any \({\varvec{\omega }}=(\omega _k, k\in I) \in \{0,1\}^I\), we set \(f_{{\varvec{\omega }}}({\varvec{x}})=\gamma \sum _{{\varvec{k}}\in I} \omega _{\varvec{k}}\varphi _{{\varvec{k}}}(x_1,\ldots ,x_s)\), where \(\varphi _{{\varvec{k}}}(x_1,\ldots ,x_s)=\prod _{j=1}^s \varphi _{k_j}(x_j), {\varvec{k}}=(k_1,\dots ,k_s)\), is an element of the tensor-product trigonometric basis and \(\gamma >0\) is a parameter to be chosen later. In view of the orthonormality of the basis functions \(\varphi _{\varvec{k}}\), we have

$$\begin{aligned} \Vert f_{\varvec{\omega }}\Vert _2^2=\gamma ^2|{\varvec{\omega }}|_1, \quad \forall \ {\varvec{\omega }}\in \{0,1\}^I. \end{aligned}$$
(23)

Therefore, we have \(\sum _{{\varvec{k}}}|{\varvec{k}}|_\infty ^{2\beta }\theta _{\varvec{k}}[f_{\varvec{\omega }}]^2\le t^{2\beta } \Vert f_{\varvec{\omega }}\Vert _2^2\le t^{2\beta }\gamma ^2(2t+1)^s\le \gamma ^2(2t+1)^{2\beta +s}\). Thus, the condition \(\gamma ^2(2t+1)^{2\beta +s}\le 1\) ensures that all the functions \(f_{\varvec{\omega }}\) belong to \(W(\beta ,1)\).

Furthermore, for two vectors \({\varvec{\omega }}, {\varvec{\omega }}^{\prime }\in \{0,1\}^I\) we have \(\Vert f_{\varvec{\omega }}-f_{{\varvec{\omega }}^{\prime }}\Vert _2^2=\gamma ^2|{\varvec{\omega }}-{\varvec{\omega }}^{\prime }|_1\). Note that the entries of the vectors \({\varvec{\omega }},{\varvec{\omega }}^{\prime }\) are either 0 or 1; therefore the \(\ell _1\) distance between these vectors coincides with the Hamming distance. According to the Varshamov-Gilbert lemma [28, Lemma 2.9], there exists a set \(\varOmega \subset \{0,1\}^I\) of cardinality at least \(2^{|I|/8}\) that contains the zero element and in which the pairwise distances \(|{\varvec{\omega }}-{\varvec{\omega }}^{\prime }|_1\) are at least \(|I|/8\) for any pair of distinct \({\varvec{\omega }},{\varvec{\omega }}^{\prime }\in \varOmega \).

We can now apply Theorem 2.7 from [28] that asserts that if, for some \(\tau >0\), we have \(\min _{{\varvec{\omega }},{\varvec{\omega }}^{\prime }\in \varOmega } \Vert f_{\varvec{\omega }}-f_{{\varvec{\omega }}^{\prime }}\Vert _2\ge 2\tau >0\), and

$$\begin{aligned} \frac{1}{|\varOmega |}\sum \limits _{{\varvec{\omega }}\in \varOmega } \mathcal{K }(\mathbf{P}_{f_{\varvec{\omega }}},\mathbf{P}\!_0)\le \frac{\log |\varOmega |}{16}, \end{aligned}$$
(24)

where \(\mathcal{K }(\cdot ,\cdot )\) denotes the Kullback-Leibler divergence, then \(\inf _{\widehat{f}}\max _{{\varvec{\omega }}\in \varOmega }\mathbf{E}_{f_{\varvec{\omega }}}[\Vert \widehat{f}-f_{{\varvec{\omega }}}\Vert _2^2]\ge c^{\prime }\tau ^2\) for some absolute constant \(c^{\prime }>0\). In our case, we set \(\tau =\gamma \sqrt{ |I|/32}\). Combining (23) and the fact that the Kullback-Leibler divergence between the Gaussian measures \(\mathbf{P}_{f}\) and \(\mathbf{P}_g\) is given by \(\frac{1}{2}\varepsilon ^{-2}\Vert f-g\Vert _2^2\), we obtain \(\frac{1}{|\varOmega |}\sum _{{\varvec{\omega }}\in \varOmega } \mathcal{K }(\mathbf{P}_{f_{\varvec{\omega }}},\mathbf{P}\!_0)\le \frac{1}{2}{\varepsilon ^{-2}\gamma ^2 |I|}\,\). If \(\gamma ^2\le (\log 2)\varepsilon ^2/64\), then (24) is satisfied and \(\tau ^2 =\gamma ^2(2t+1)^s/32\) is a lower bound on the rate of convergence of the minimax risk.

To finish the proof, it suffices to choose \(t\in \mathbb{N }\) and \(\gamma >0\) satisfying the following three conditions: \(t\ge 4\), \(\gamma ^2\le (2t+1)^{-2\beta -s}\) and \(\gamma ^2\le \varepsilon ^2\log (2)/64\). For the choice \(\gamma ^{-2}=(2t+1)^{2\beta +s}+\varepsilon ^{-2}64/\log (2)\) and \(t=[4\varepsilon ^{-2/(2\beta +s)}]\) all these conditions are satisfied and \(\tau ^2 \ge c_1\varepsilon ^{4\beta /(2\beta +s)}\) for some absolute positive constant \(c_1\).
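
The Varshamov-Gilbert lemma invoked above is purely existential. For intuition only, a brute-force greedy construction of a well-separated subset of \(\{0,1\}^I\) can be sketched as follows (ours; exponential in \(|I|\), so only usable for very small \(|I|\)).

```python
from itertools import product

def greedy_packing(n, min_dist):
    """Greedily collect binary vectors of length n whose pairwise Hamming distance
    is at least min_dist, starting from the zero vector (brute force, small n only)."""
    packing = [(0,) * n]
    for cand in product((0, 1), repeat=n):
        if all(sum(c != w for c, w in zip(cand, word)) >= min_dist
               for word in packing):
            packing.append(cand)
    return packing

n = 10                                           # plays the role of |I|
Omega = greedy_packing(n, min_dist=-(-n // 8))   # minimum distance ceil(|I|/8) = 2
print(len(Omega))   # far more than the 2^{|I|/8} ≈ 2.4 points guaranteed by the lemma
```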

4.2 Lower bound for unknown structure \({\varvec{\eta }}\)

Proposition 2

Let the assumptions of Proposition 1 be satisfied. Then there exists an absolute constant \(C^{\prime }>0\) such that

$$\begin{aligned} \inf _{\widehat{f}}\sup _{f\in \mathcal{F }_{s,m}(\varvec{W}(\beta ,L))}\mathbf{E}_f[\Vert \widehat{f}-f\Vert _2^2]\ge C^{\prime }\min \bigg \{L,\,ms\varepsilon ^2\log \bigg (\frac{8\,d}{sm^{1/s}}\bigg )\bigg \}\,. \end{aligned}$$

Proof

We use again Theorem 2.7 in [28] but with a choice of the finite subset of \(\mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\) different from that of Proposition 1. First, we introduce some additional notation. For every triplet \((m,s,d)\in \mathbb{N }_*^3\) satisfying \(ms\le d\), let \(\mathcal{P }_{s,m}^d\) be the set of collections \(\pi =\{V_1,\ldots ,V_m\}\) such that each \(V_\ell \subseteq \{ 1,\dots ,d\}\) has exactly \(s\) elements and \(V_\ell \)’s are pairwise disjoint. We consider \(\mathcal{P }_{s,m}^d\) as a metric space with the distance \(\rho (\pi ,\pi ^{\prime })= \frac{1}{m}\sum _{\ell =1}^m \mathbf{1}(V_\ell \not \in \{V^{\prime }_1,\ldots ,V^{\prime }_m\})= \frac{|\pi \Delta \pi ^{\prime }|}{2m}\,\), where \(\pi ^{\prime }=\{V_1^{\prime },\ldots ,V_m^{\prime }\}\in \mathcal{P }_{s,m}^d\). It is easy to see that \(\rho (\cdot ,\cdot )\) is a distance bounded by \(1\).

For any \(\vartheta \in (0,1)\), let \(\mathcal{N }^d_{s,m}(\vartheta )\) denote the logarithm of the packing number, i.e., the logarithm of the largest integer \(K\) such that there are \(K\) elements \(\pi ^{(1)},\ldots ,\pi ^{(K)}\) of \(\mathcal{P }_{s,m}^d\) satisfying \(\rho (\pi ^{(k)},\pi ^{(k^{\prime })})\ge \vartheta \) for all \(k\ne k^{\prime }\). Using these \(\pi ^{(k)}\), we define the family of functions \(\mathcal{U }=\{f_{k,{\varvec{\omega }}}:{\varvec{\omega }}\in \{-1,1\}^{ms}, \,k=1,\dots ,K\}\) by

$$\begin{aligned} f_{k,{\varvec{\omega }}}({\varvec{x}})=\frac{\tau }{\sqrt{m}} \sum _{V\in \pi ^{(k)}} \varphi _{{\varvec{\omega }},V}({\varvec{x}}_V), \end{aligned}$$

where \(\tau =(1/4)\min \big (\varepsilon \sqrt{ms\log 2+\log K}, \sqrt{L}\big )\) and \(\varphi _{{\varvec{\omega }},V}({\varvec{x}}_V)=\prod _{j\in V} \varphi _{\omega _j}(x_j)\). Using that \(\{\varphi _{{\varvec{j}}}\}\) is the tensor-product trigonometric basis it is easy to see that each \(f_{k,{\varvec{\omega }}}\) belongs to \(\mathcal{F }_{s,m}(\varvec{W}(\beta ,L))\). Next, \(|\mathcal{U }|=2^{ms}K\) and, for any \(f_{k,{\varvec{\omega }}}\in \mathcal{U }\), the Kullback-Leibler divergence between \(\mathbf{P}_{f_{k,{\varvec{\omega }}}}\) and \(\mathbf{P}\!_0\) is equal to \(\mathcal{K }(\mathbf{P}_{f_{k,{\varvec{\omega }}}},\mathbf{P}\!_0)=\frac{1}{2}\varepsilon ^{-2}\Vert f_{k,{\varvec{\omega }}}\Vert _2^2=\frac{\varepsilon ^{-2}\tau ^2}{2} \le \frac{\log |\mathcal{U }|}{16}\). Furthermore, the functions \(f_{k,{\varvec{\omega }}}\) are not too close to each other. Indeed, since \(\{\varphi _{{\varvec{j}}}\}\) is the tensor-product trigonometric basis we get that, for all \(f_{k,{\varvec{\omega }}},f_{k^{\prime },{\varvec{\omega }}^{\prime }}\in \mathcal{U }\),

$$\begin{aligned} \Vert f_{k,{\varvec{\omega }}}-f_{k^{\prime },{\varvec{\omega }}^{\prime }}\Vert _2^2&= \tau ^2m^{-1}\left(2m-\sum _{V\in \pi ^{(k)}}\sum _{V^{\prime }\in \pi ^{(k^{\prime })}} \int _{[0,1]^d}\varphi _{{\varvec{\omega }},V}({\varvec{x}}_V)\varphi _{{\varvec{\omega }}^{\prime },V^{\prime }}({\varvec{x}}_{V^{\prime }})\,d{\varvec{x}}\right)\\&\!\ge \! \tau ^2\left(2\!-\!\frac{1}{m}\sum _{V\in \pi ^{(k)}}\sum _{V^{\prime }\!\in \! \pi ^{(k^{\prime })}} \mathbf{1}(V\!=\!V^{\prime })\right) \!=\!2\tau ^2\rho (\pi ^{(k)},\pi ^{(k^{\prime })})\!\ge \! 2\vartheta \tau ^2. \end{aligned}$$

These remarks and Theorem 2.7 in [28] imply that

$$\begin{aligned} \inf _{\widehat{f}}\sup _{f\in \mathcal{U }}\mathbf{E}_f[\Vert \widehat{f}-f\Vert _2^2]&\ge c_3 \vartheta \tau ^2 = \frac{c_3\vartheta }{16} \min \Big \{L,\varepsilon ^2(ms\log 2+\log K)\Big \} \end{aligned}$$
(25)

for some absolute constant \(c_3>0\). Assume first that \(d<4sm^{1/s}\). Then \(ms\log 2\ge \frac{ms}{5}\log \big (\frac{8d}{sm^{1/s}}\big )\) and the result of the proposition is straightforward. If \(d\ge 4sm^{1/s}\) we fix \(\vartheta =1/8\) and use the following lemma (cf. the Appendix for a proof) to bound \(\log K=\mathcal{N }_{s,m}^d(\vartheta )\) from below.

Lemma 2

If \(d\ge 4sm^{1/s}\) and \(\vartheta \in (0,1/8]\), we have \(\mathcal{N }^d_{s,m}(\vartheta )\ge -m\log ({2es^{1/2}})+\frac{2ms}{3}\log \big (\frac{d}{sm^{1/s}}\big )\).

This yields

$$\begin{aligned} ms\log 2+\mathcal{N }_{s,m}^d(\vartheta )&\ge \frac{2ms}{3}\log \left(\frac{8d}{sm^{1/s}}\right)-m \log \left({2es^{1/2}}\right)\nonumber \\&= \frac{2ms}{3}\log \left(\frac{8d}{sm^{1/s}}\right) \left(1-\frac{s^{-1}\log \big ({2es^{1/2}}\big )}{\frac{2}{3}\log \big (\frac{8d}{sm^{1/s}}\big )}\right). \end{aligned}$$
(26)

It is easy to check that \(s^{-1}\log ({2es^{1/2}})\le 1.7,\) while for \(d\ge 4sm^{1/s}\) we have \(\frac{2}{3}\log \Big (\frac{8d}{sm^{1/s}}\Big )\ge 2.3.\) Combining these inequalities with (25) and (26) we get the result.

5 Discussion and outlook

We presented a new framework, called the compound functional model, for performing various statistical tasks, such as prediction, estimation and testing, in a high-dimensional context. We studied the problem of estimation in this model from a minimax point of view when the data are generated by a Gaussian process. We established upper and lower bounds on the minimax risk that match up to a multiplicative constant. These bounds are non-asymptotic and are attained adaptively with respect to the macroscopic and microscopic sparsity parameters \(m\) and \(s\), as well as to the complexity of the atoms of the model. In particular, we improve in several respects upon the existing results for the sparse additive model, which is a special case of the compound functional model (and the only case for which the rates were previously treated explicitly in the literature):

  • The exact expression for the optimal rate that we obtain reveals that the existing methods for the sparse additive model based on penalized least squares techniques have logarithmically suboptimal rates.

  • Unlike most previous work, we do not require restricted isometry type assumptions on the subspaces of the additive model; we need only the much weaker one-sided condition (2). Possible extensions of the existing literature to the general compound model would again suffer from rate suboptimality and would require extra conditions of this type.

  • When specialized to the sparse additive model, our results are adaptive with respect to the smoothness of the atoms, whereas all previous work on the rates treated the smoothness (or the reproducing kernel) as given in advance.

For the general compound model, the main difficulty lies in the proof of the lower bounds of the order \(ms \varepsilon ^2\log ( d/(s m^{1/s}))\), which are not covered by standard tools such as the Varshamov-Gilbert lemma or the \(k\)-selection lemma. We therefore developed new tools for the lower bounds, which can be of independent interest.

An important issue that remained outside the scope of the present work, but is undeniably worth studying, is the possibility of achieving the minimax rates by computationally tractable procedures. Clearly, the complexity of exact computation of the procedure described in Sect. 3 scales as \(\varepsilon ^{-2m}2^{M_{d,s}}\), which is prohibitively large for typical values of \(d, s\) and \(m\). It is possible, however, to approximate our estimator by using a Markov Chain Monte-Carlo (MCMC) algorithm similar to that of [23, 24]. The idea is to begin with an initial state \(({\varvec{t}}_0,{\varvec{\eta }}_0)\) and to randomly generate a new candidate \(({\varvec{u}},{\varvec{\zeta }})\) according to the distribution \(q(\cdot |{\varvec{t}}_0,{\varvec{\eta }}_0)\), where \(q(\cdot |\cdot )\) is a given Markov kernel. Then, a Bernoulli random variable \(\xi \) taking the value 1 with probability \(\alpha = 1\wedge \frac{\widehat{\pi }({\varvec{u}},{\varvec{\zeta }})}{\widehat{\pi }({\varvec{t}}_0,{\varvec{\eta }}_0)}\frac{q({\varvec{t}}_0,{\varvec{\eta }}_0|{\varvec{u}},{\varvec{\zeta }})}{q({\varvec{u}},{\varvec{\zeta }}|{\varvec{t}}_0,{\varvec{\eta }}_0)}\) is drawn and a new state \(({\varvec{t}}_1,{\varvec{\eta }}_1)=\xi \cdot ({\varvec{u}},{\varvec{\zeta }})+(1-\xi )\cdot ({\varvec{t}}_0,{\varvec{\eta }}_0)\) is defined. This procedure is repeated \(K\) times, thus producing a realization \(\{({\varvec{t}}_k,{\varvec{\eta }}_k); k=0,\ldots ,K\}\) of a reversible Markov chain. Then, the average value \(\frac{1}{K}\sum _{k=1}^K \widehat{\varvec{\theta }}_{{\varvec{t}}_k,{\varvec{\eta }}_k}\) provides an approximation to the estimator \(\widehat{f}_\varepsilon \) defined in Sect. 3.
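
A minimal sketch of one step of such a Metropolis-Hastings scheme is given below (ours; the target log-weight, the proposal kernel and the state encoding are toy placeholders, not the quantities \(\widehat{\pi }\) and \(q\) prescribed above).

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step(state, log_weight, propose, log_q):
    """One Metropolis-Hastings step: propose a new state from q(.|state) and accept it
    with probability min(1, pi_hat(new)/pi_hat(state) * q(state|new)/q(new|state))."""
    new = propose(state)
    log_alpha = (log_weight(new) - log_weight(state)
                 + log_q(state, new) - log_q(new, state))
    if np.log(rng.random()) < min(0.0, log_alpha):
        return new
    return state

# Toy stand-ins (placeholders, not the quantities of Sect. 3): the state is a pair
# (t, eta) with t an integer cut-off and eta a subset of {1,...,d} of size <= s.
d, s = 6, 2

def log_weight(state):
    t, eta = state
    return -0.5 * (t - 3) ** 2 - 0.1 * len(eta)   # any unnormalized log-density

def propose(state):
    t, eta = state
    t_new = max(0, t + rng.choice([-1, 1]))       # local move on the cut-off
    flip = int(rng.integers(1, d + 1))            # flip one coordinate in/out of eta
    eta_new = eta - {flip} if flip in eta else (eta | {flip} if len(eta) < s else eta)
    return t_new, frozenset(eta_new)

def log_q(a, b):
    return 0.0                                    # a symmetric proposal is assumed here

state = (0, frozenset())
chain = [state]
for _ in range(1000):
    state = mh_step(state, log_weight, propose, log_q)
    chain.append(state)
print(chain[-1])
```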

If \(s\) and \(m\) are small and \(q(\cdot |{\varvec{t}},{\varvec{\eta }}^{\prime })\) is such that all the mass of this distribution is concentrated on the nearest neighbors of \({\varvec{\eta }}^{\prime }\) in the hypercube of all \(2^{M_{d,s}}\) possible \({\varvec{\eta }}\)’s, then the computations can be performed in polynomial time. For example, if \(s=2\), i.e., if we allow only pairwise interactions, each step of the algorithm requires \(\sim \varepsilon ^{-2m}d^2\) computations, where the factor \(\varepsilon ^{-2m}\) can be reduced to a power of \(\log (\varepsilon ^{-2})\) by a suitable modification of the estimator. How fast such MCMC algorithms converge to our estimator and what is the most appealing choice of the Markov kernel \(q(\cdot |\cdot )\) are challenging open questions for future research.