Abstract
We address the structure identification and the uniform approximation of fully nonlinear two-layer neural networks of the type \(f(x)=1^T h(B^T g(A^T x))\) on \(\mathbb R^d\), where \(g=(g_1,\dots , g_{m_0})\), \(h=(h_1,\dots , h_{m_1})\), \(A=(a_1|\dots |a_{m_0}) \in \mathbb R^{d \times m_0}\) and \(B=(b_1|\dots |b_{m_1}) \in \mathbb R^{m_0 \times m_1}\), from a small number of query samples. The solution of the case of two hidden layers presented in this paper is crucial, as it can be further generalized to deeper neural networks. We approach the problem by actively sampling finite difference approximations to Hessians of the network. Gathering several approximate Hessians allows us to reliably approximate the matrix subspace \(\mathcal W\) spanned by the symmetric tensors \(a_1 \otimes a_1,\dots ,a_{m_0}\otimes a_{m_0}\) formed by the weights of the first layer, together with the entangled symmetric tensors \(v_1 \otimes v_1 ,\dots ,v_{m_1}\otimes v_{m_1}\) formed by suitable combinations of the weights of the first and second layer as \(v_\ell =A G_0 b_\ell /\Vert A G_0 b_\ell \Vert _2\), \(\ell \in [m_1]\), for a diagonal matrix \(G_0\) depending on the activation functions of the first layer. The identification of the rank-1 symmetric tensors within \(\mathcal W\) is then performed by the solution of a robust nonlinear program, maximizing the spectral norm of the competitors constrained to the unit Frobenius sphere. We provide guarantees of stable recovery under a posteriori verifiable conditions. Once the rank-1 symmetric tensors \(\{a_i \otimes a_i, i\in [m_0]\}\cup \{v_\ell \otimes v_\ell , \ell \in [m_1] \}\) are computed, we address their correct attribution to the first or second layer (the \(a_i\)’s are attributed to the first layer). The attribution to the layers is currently based on semi-heuristic reasoning, but it shows clear potential for reliable execution.
Having the correct attribution of the \(a_i,v_\ell \) to the respective layers and the consequent de-parametrization of the network, it is possible, by using a suitably adapted gradient descent iteration, to estimate, up to intrinsic symmetries, the shifts of the activation functions of the first layer and to compute exactly the matrix \(G_0\). Eventually, from the vectors \(v_\ell =A G_0 b_\ell /\Vert A G_0 b_\ell \Vert _2\) and the \(a_i\)’s, one can disentangle the weights \(b_\ell \) by simple algebraic manipulations. Our method of identification of the weights of the network is fully constructive, with quantifiable sample complexity, and therefore helps to reduce the black-box nature of the network training phase. We corroborate our theoretical results by extensive numerical experiments, which confirm the effectiveness and feasibility of the proposed algorithmic pipeline.
1 Introduction
Deep learning is perhaps one of the most sensational scientific and technological developments in industry of recent years. Despite the spectacular success of deep neural networks (NN), outperforming other pattern recognition methods and achieving even superhuman skills in some domains [12, 36, 57], with confirmed empirical successes in other areas such as speech recognition [27], optical character recognition [8], and game solving [44, 55], the mathematical understanding of the technology of machine learning is in its infancy. This is not only unsatisfactory from a scientific, especially mathematical, point of view, but it also means that deep learning currently has the character of a black-box method, and its success cannot yet be ensured by a full theoretical explanation. This leads to a lack of acceptance in many areas where interpretability is a crucial issue (like security, cf. [10]) or in applications where one wants to extract new insights from data [60].
Several general mathematical results on neural networks have been available since the 1990s [2, 17, 38, 39, 46,47,48], but deep neural networks have special features and in particular superior properties in applications that still cannot be fully explained from the known results. In recent years, new interesting mathematical insights have been derived for understanding approximation properties (expressivity) [19, 53] and stability properties [9, 67] of deep neural networks. Several other crucial and challenging questions remain open.
A fundamental one concerns the number of training data required to obtain a good neural network, i.e., one achieving small generalization errors on future data. Classical statistical learning theory splits this error into bias and variance and gives general estimates by means of the so-called VC dimension or the Rademacher complexity of the used class of neural networks [54]. However, the currently available estimates of these parameters [26] provide very pessimistic barriers in comparison to the empirical success. In fact, the trade-off between bias and variance is a function of the complexity of a network, which should be estimated by the number of sampling points needed to identify it uniquely. Thus, on the one hand, it is of interest to know which neural networks can be uniquely determined in a stable way by finitely many training points. On the other hand, unique identifiability is clearly a form of interpretability.
The motivating problem of this paper is the robust and resource-efficient identification of feedforward neural networks. Unfortunately, it is known that identifying even a very simple (but general enough) neural network is NP-hard [7, 33]. Even without invoking fully connected neural networks, recent work [22, 41] showed that the training of one single neuron (a ridge function or single-index model) can exhibit any possible degree of intractability, depending on the distribution of the input. Recent results [3, 34, 42, 52, 56], on the other hand, are more encouraging and show that minimizing the square loss of a (deep) neural network does not, in general or asymptotically (for a large number of neurons), have poor local minima, although it may retain critical saddle points.
In this paper, we present conditions for a fully nonlinear two-layer neural network to be provably identifiable with a number of samples depending polynomially on the dimension of the network. Moreover, we prove that our procedure is robust to perturbations. Our result is clearly of a theoretical nature, but also fully constructive and easily implementable. To our knowledge, this work is the first that allows a provable de-parametrization of the problem of deep network identification, beyond the simpler case of shallow (one hidden layer) neural networks already considered in the very recent literature [3, 23, 32, 34, 42, 43, 52, 56]. For the implementation, we do not require black-box high-dimensional optimization methods, and no concerns about complex energy loss landscapes need to be addressed; only classical and relatively simple calculus and linear algebra tools are used (mostly function differentiation and singular value decompositions). The results of this paper build upon the work [22, 23], where the approximation from a finite number of sampling points has already been derived for the single neuron and one-layer neural networks. The generalization of the approach of the present paper to networks with more than two hidden layers is simpler than one may expect, and it is in the course of finalization [21]; see Sect. 5 (v) below for some details.
1.1 Notation
Let us collect here some notation used in this paper. Given any integer \(m \in \mathbb N\), we use the symbol \([m]:=\{1,2,\dots ,m \}\) for indicating the index set of the first m integers. We denote by \(B_1^d\) the Euclidean unit ball in \(\mathbb R^d\), \(\mathbb S^{d-1}\) the Euclidean sphere, and \(\mu _{\mathbb S^{d-1}}\) is its uniform probability measure. We denote \(\ell _q^d\) the d-dimensional Euclidean space endowed with the norm \(\Vert x\Vert _{\ell _q^d} =\left( \sum _{j=1}^d |x_j|^q \right) ^{1/q}\). For \(q=2\), we often write indifferently \(\Vert x\Vert = \Vert x\Vert _{2}= \Vert x\Vert _{\ell _2^d}\). For a matrix M, we denote \(\sigma _k(M)\) its \(k^{th}\) singular value. We denote \(\mathbb S\) the sphere of symmetric matrices of unit Frobenius norm \(\Vert \cdot \Vert _F\). The spectral norm of a matrix is denoted \(\Vert \cdot \Vert \). Given a closed convex set C, we denote \(P_C\) the orthogonal projection operator onto C. (Sometimes we use such operators to project onto subspaces of \(\mathbb R^d\) or subspaces of symmetric matrices or onto balls of such spaces.) For vectors \(x_1,\dots , x_k \in \mathbb R^d\), we denote the tensor product \(x_1 \otimes \dots \otimes x_k\) as the tensor of entries \(({x_1}_{i_1} \dots {x_k}_{i_k})_{i_1,\dots ,i_k}\). For the case of \(k=2\), the tensor product \(x \otimes y\) of two vectors \(x,y \in \mathbb R^d\) equals the matrix \(x y^T = (x_i y_j)_{ij}\). For any matrix \(M \in \mathbb {R}^{m \times n}\)
\({\text {vec}}(M) \in \mathbb {R}^{mn}\) is its vectorization, which is the vector created by the stacked columns of M.
1.2 From One Artificial Neuron to Shallow and Deeper Networks
1.2.1 Meet the Neuron
The simplest artificial neural network \(f:\Omega \subset \mathbb {R}^d \rightarrow \mathbb {R}\) is a network consisting of exactly one artificial neuron, which is modeled by a ridge-function (or single-index model) f as
$$\begin{aligned} f(x) = g(a^T x), \end{aligned}$$
where \(g:\mathbb {R}\rightarrow \mathbb {R}\) is the shifted activation function \(\phi ( \cdot + \theta )\) and the vector \(a \in \mathbb {R}^d\) expresses the weight of the neuron. Since the beginning of the 1990s [30, 31], there has been a vast mathematical statistics literature about single-index models, which addresses the problem of approximating a, and possibly also g, from a finite number of samples of f to yield an expected least-squares approximation of f on a bounded domain \(\Omega \subset \mathbb {R}^d\). Assume now for the moment that we can evaluate the network f at any point in its domain; we refer to this setting as active sampling. As we aim at uniform approximations, we adhere here to the language of recent results about the sampling complexity of ridge functions from the approximation theory literature, e.g., [13, 22, 41]. In those papers, the identification of the neuron is performed by using approximate differentiation. Let us clarify how this method works, as it will be of inspiration for the further developments below. For any \(\epsilon >0\), points \(x_i\), \(i=1,\dots ,m_{\mathcal X}\), and differentiation directions \(\varphi _j\), \(j=1,\dots ,m_{\Phi }\), we have
$$\begin{aligned} \frac{f(x_i+\epsilon \varphi _j)-f(x_i)}{\epsilon } \approx g'(a^T x_i)\, a^T \varphi _j. \end{aligned}$$(1)
Hence, differentiation exposes the weight of a neuron and allows one to test it against test vectors \(\varphi _j\). The approximate relationship (1) forms, for every fixed index i, a linear system of dimensions \(m_\Phi \times d\), whose unknown is \(x^*_i=g'(a^T x_i) a\). Solving the systems approximately and independently for \(i=1,\dots ,m_{\mathcal X}\) yields multiple approximations \(\hat{a}=x^*_i/\Vert x^*_i\Vert _2\approx a\) of the weight; the most stable of them with respect to the approximation error in (1) is the one for which \(\Vert x^*_i\Vert _2\) is maximal. Once \(\hat{a} \approx a\) is learned, one can easily construct a function \(\hat{f}(x) = \hat{g}(\hat{a}^T x)\) by approximating \(\hat{g}(t) \approx f(\hat{a} t)\) on further sampling points. Under assumptions of smoothness of the activation function, \(g\in C^s([0,1])\) for \(s>1\) with \(g'(0) \ne 0\), and of compressibility of the weight, i.e., \(\Vert a\Vert _{\ell _q^d}\) is small for \(0<q \le 1\), by using L sampling points of the function f and the approach sketched above one can construct a function \(\hat{f}(x) = \hat{g}(\hat{a}^T x)\) such that
In particular, the result constructs the approximation of the neuron with an error that decays at a polynomial rate with respect to the number of samples, depending on the smoothness of the activation function and the compressibility of the weight vector a. The dependence on the input dimension is only logarithmic. To take advantage of the compressibility of the weight, compressive sensing [24] is a key tool for solving the linear systems (1). In [13], such an approximation result was obtained by an active and deterministic choice of the input points \(x_i\). In order to relax the active sampling somewhat, in the paper [22] a random sampling of the points \(x_i\) has been proposed, and the resulting error estimate holds with high probability. The assumption \(g'(0) \ne 0\) is somehow crucial, since it was pointed out in [22, 41] that any level of tractability (polynomial complexity) and intractability (super-polynomial complexity) of the problem may be exhibited otherwise.
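The first-order scheme above can be sketched numerically in a simplified setting. The following minimal illustration assumes a tanh neuron and orthonormal test directions; all names are ours, not from [13, 22]:

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps = 20, 1e-4

# Hypothetical neuron f(x) = g(a^T x); a and g are of course unknown to the method.
a = rng.standard_normal(d); a /= np.linalg.norm(a)
g = np.tanh                      # plays the role of phi(. + theta), g'(0) != 0
f = lambda x: g(a @ x)

# Directional finite differences at points x_i, tested against directions phi_j:
#   (f(x_i + eps*phi_j) - f(x_i)) / eps  ~=  g'(a^T x_i) * (a^T phi_j)
m_X = 5
X = 0.1 * rng.standard_normal((m_X, d))
Phi = np.linalg.qr(rng.standard_normal((d, d)))[0]   # orthonormal test directions

candidates = []
for x in X:
    y = np.array([(f(x + eps * phi) - f(x)) / eps for phi in Phi])
    # m_Phi x d linear system  Phi @ x_star = y  with unknown x_star = g'(a^T x) a
    x_star, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    candidates.append(x_star)

# Most stable candidate: the one with largest norm, i.e. largest |g'(a^T x_i)|.
best = max(candidates, key=np.linalg.norm)
a_hat = best / np.linalg.norm(best)
a_hat *= np.sign(a_hat @ a)      # resolve the sign ambiguity (for comparison only)
print(np.linalg.norm(a_hat - a))
```

The recovery error is of the order of the finite difference stepsize, in line with the discussion above.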
1.2.2 Shallow Networks: The One-Layer Case
Combining several neurons leads to richer function classes [38, 39, 46,47,48]. A neural network with one hidden layer and one output is simply a weighted sum of neurons whose activation function only differs by a shift, i.e.,
$$\begin{aligned} f(x) = \sum _{i=1}^m b_i\, \phi (a_i^T x + \theta _i), \end{aligned}$$(2)
where \(a_i \in \mathbb {R}^d\) and \(b_i, \theta _i \in \mathbb {R}\) for all \(i = 1, \dots , m\). Sometimes it may be convenient to use the more compact writing \(f(x) =1^T g(A^T x)\), where \(g=(g_1,\dots , g_m)\) and \(A=[a_1|\dots |a_m] \in \mathbb R^{d \times m}\). Differently from the case of the single neuron, the use of first-order differentiation
$$\begin{aligned} \nabla f(x) = \sum _{i=1}^m g_i'(a_i^T x)\, a_i \end{aligned}$$(3)
may furnish information about \(A = {\text {span}}\left\{ a_1, \dots , a_m \right\} \) (active subspace identification [14, 15], see also [22, Lemma 2.1]), but it does not yet allow one to extract information about the individual weights \(a_i\). For that, higher-order information is needed. Recent work shows that the identification of a network (2) can be related to tensor decompositions [1, 23, 32, 43]. As pointed out in Sect. 1.2.1, differentiation exposes the weights. In fact, one way to relate the network to tensors and tensor decompositions is given by higher-order differentiation. In this case, the tensor takes the form
which requires that the \(g_i\)’s are sufficiently smooth. In a setting where the samples are actively chosen, it is generally possible to approximate these derivatives by finite differences. However, even for passive sampling there are ways to construct similar tensors [23, 32], which rely on Stein’s lemma [58], differentiation by parts, or weak differentiation. Let us explain how passive sampling in this setting may be used for obtaining tensor representations of the network. If the probability measure of the sampling points \(x_i\)’s is \(\mu _X\) with known (or approximately known [18]) density p(x) with respect to the Lebesgue measure, i.e., \(d \mu _X(x) = p(x) dx\), then we can approximate the expected value of higher order derivatives by using exclusively point evaluations of f. This follows from
$$\begin{aligned} \mathbb {E}\big [\nabla ^k f(X)\big ] = \int \nabla ^k f(x)\, p(x)\, \mathrm {d}x = (-1)^k \int f(x)\, \nabla ^k p(x)\, \mathrm {d}x = (-1)^k\, \mathbb {E}\left[ f(X)\, \frac{\nabla ^k p(X)}{p(X)}\right] , \end{aligned}$$
for sufficiently smooth and decaying p.
In the work [32], decompositions of third-order symmetric tensors (\(k=3\)) [1, 35, 51] have been used for the weight identification of one hidden layer neural networks. Instead, beyond the classical results about principal Hessian directions [37], in [23] it is shown that using second derivatives \((k=2)\) actually suffices, and the corresponding error estimates reflect positively the lower order and the potential for improved stability, see e.g., [16, 28, 29]. The main part of the present work is an extension of the latter approach, and therefore we will give a short summary of it with emphasis on active sampling, which will be assumed as the sampling method in this paper. The first step of the approach in [23] is taking advantage of (3) to reduce the dimensionality of the problem from d to m.
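For standard Gaussian inputs, the differentiation-by-parts identity specializes to the second-order Stein identity \(\mathbb E[f(X)(XX^T - I)] = \mathbb E[\nabla ^2 f(X)]\). A minimal Monte Carlo sketch (with an assumed shifted-tanh one-layer network; all choices are ours for illustration) compares this passive-sampling estimate with the analytic expected Hessian:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 200_000

# Hypothetical one-layer network f(x) = sum_i g(a_i^T x) with shifted tanh
# activations (a nonzero shift keeps the expected Hessian away from zero).
A = rng.standard_normal((d, 3)); A /= np.linalg.norm(A, axis=0)
g  = lambda t: np.tanh(t + 0.5)
g2 = lambda t: -2 * np.tanh(t + 0.5) * (1 - np.tanh(t + 0.5) ** 2)   # g''
f  = lambda x: g(x @ A).sum(axis=-1)

# Second-order Stein identity for X ~ N(0, I_d):
#   E[ f(X) (X X^T - I) ] = E[ nabla^2 f(X) ]
X  = rng.standard_normal((N, d))
fX = f(X)
stein = np.einsum('n,ni,nj->ij', fX, X, X) / N - fX.mean() * np.eye(d)

# Reference: Monte Carlo average of the analytic Hessian on the same draw,
# nabla^2 f(x) = sum_k g''(a_k^T x) a_k a_k^T.
w = g2(X @ A)                                   # (N, 3)
exact = np.einsum('nk,ik,jk->ij', w, A, A) / N
print(np.linalg.norm(stein - exact))            # small (Monte Carlo error)
```

Only point evaluations of f enter the Stein estimate, as in the passive sampling discussion above.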
Reduction to the active subspace Before stating the core procedure, we want to introduce a simple and optional method, which can help to reduce the problem complexity in practice. Assume \(f: \Omega \subset \mathbb {R}^d \rightarrow \mathbb {R}\) takes the form (2), where \(d \ge m\), and that \(a_1, \dots , a_m\in \mathbb {R}^d\) are linearly independent. From a numerical perspective, the input dimension d of the network plays a relevant role in terms of the complexity of the procedure. For this reason, in [23] the input dimension is effectively reduced to the number of neurons in the first hidden layer. With this reasoning, in the sections that follow we also consider networks where the input dimension matches the number of neurons of the first hidden layer.
Assume for the moment that the active subspace \(A={\text {span}}\left\{ a_1, \dots , a_m \right\} \) is known. Let us choose any orthonormal basis of A and arrange it as the columns of a matrix \(\hat{A} \in \mathbb {R}^{d \times m}\). Then
which can be used to define a new network
whose weights are \(\alpha _1 = \hat{A}^T a_1, \dots , \alpha _m = \hat{A}^T a_m\); all the other parameters remain unchanged. Note that \(\hat{A} \alpha _i = P_A a_i = a_i\), and therefore, \(a_i\) can be recovered from \(\alpha _i\). In summary, if the active subspace of f is approximately known, then we can construct \(\hat{f}\) such that the identifications of f and \(\hat{f}\) are equivalent. This allows us to reduce the problem to the identification of \(\hat{f}\) instead of f, under the condition that we approximate \(P_A\) well enough [23, Theorem 1.1]. As recalled in (3), we can easily produce approximations to vectors in A by approximate first-order differentiation of the original network f, and, in an ideal setting, generating m linearly independent gradients would suffice to approximate A. However, in general, there is no way to ensure such linear independence a priori, and we have to account for the error caused by approximating gradients by finite differences. Under suitable assumptions on f (see the full rank condition on the matrix J[f] defined in (4) below) and using Algorithm 1, we obtain the following approximation result.
[Algorithm 1: pseudocode figure]
Theorem 1
([23], Theorem 2.2) Assume the vectors \((a_i)_{i=1}^m\) are linearly independent and of unit norm. Additionally, assume that the \(g_i\)’s are smooth enough. Let \(P_{\hat{A}}\) be constructed as described in Algorithm 1 by sampling \(m_{X} (d+1)\) values of f. Let \(0<s<1\), and assume that the matrix
has full rank, i.e., its m-th singular value fulfills \(\sigma _m\left( J[f]\right) \ge \alpha >0\). Then
with probability at least \(1 - m \exp \Bigl (-\frac{m_{X}\alpha s^2 }{2 m^2 C_2^2}\Bigr )\), where \(C_1, C_2 > 0\) are absolute constants depending on the smoothness of \(g_i\)’s.
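The gradient-PCA step behind Algorithm 1 can be sketched as follows. This is a toy illustration with assumed tanh activations: finite-difference gradients are stacked into a matrix and the active subspace is read off a truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, m_X, eps = 30, 4, 60, 1e-5

# Hypothetical network f(x) = sum_i tanh(a_i^T x + theta_i); its gradients
# all lie in the active subspace A = span{a_1, ..., a_m}.
A = rng.standard_normal((d, m)); A /= np.linalg.norm(A, axis=0)
theta = rng.standard_normal(m)
f = lambda x: np.tanh(x @ A + theta).sum(axis=-1)

# Finite-difference gradients at m_X random points (d + 1 evaluations each).
X = 0.5 * rng.standard_normal((m_X, d))
I = np.eye(d)
G = np.stack([[(f(x + eps * I[i]) - f(x)) / eps for i in range(d)] for x in X],
             axis=1)                              # shape (d, m_X)

# Estimated active subspace = span of the top-m left singular vectors of G.
U, s, _ = np.linalg.svd(G, full_matrices=False)
P_hat = U[:, :m] @ U[:, :m].T

Q = np.linalg.qr(A)[0]                            # true projector, for comparison
P_true = Q @ Q.T
print(np.linalg.norm(P_hat - P_true))             # ~ finite-difference accuracy
```

With exact point evaluations, the projector error is governed by the finite difference stepsize and the m-th singular value of the gradient matrix, mirroring the role of \(\sigma _m(J[f])\) in the theorem.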
Identifying the weights As clarified in the previous section, we can assume from now on that \(d=m\) without loss of generality. Let f be a network of the type (2), with twice differentiable activation functions \((g_i)_{i=1,\dots , m}\) and linearly independent weights \((a_i)_{i=1,\dots ,m} \in \mathbb {R}^m\) of unit norm. Then f has second derivative
$$\begin{aligned} \nabla ^2 f(x) = \sum _{i=1}^m g_i''(a_i^T x)\, a_i \otimes a_i, \end{aligned}$$(5)
whose expression represents a nonorthogonal rank-1 decomposition of the Hessian. The idea is, first of all, to modify the network by an ad hoc linear transformation (whitening) of the input
in such a way that \((Wa_i/\Vert Wa_i\Vert _2)_{i=1,\dots , m}\) forms an orthonormal system. The computation of W can be performed by spectral decomposition of any positive definite matrix
In fact, from the spectral decomposition of \(G = UDU^T\), we define \(W = D^{-\frac{1}{2}}U^T\) (see [23, Theorem 3.7]). This procedure is called whitening; it allows one to reduce the problem to networks with nearly-orthogonal weights, and it presupposes that one has obtained \( \hat{\mathcal{A}}\approx \mathcal{A}={\text {span}}\left\{ a_1 \otimes a_1, \dots , a_m \otimes a_m\right\} \). By using (5) and a similar approach as in Algorithm 1 (one simply substitutes there the approximate gradients with vectorized approximate Hessians), one can compute \(\hat{\mathcal{A}}\) under the assumption that also the second-order matrix
is of full rank, where \({\text {vec}}(\nabla ^2 f (x))\) is the vectorization of the Hessian \(\nabla ^2 f (x)\).
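The whitening step can be illustrated with a small numerical sketch. For simplicity, we build the positive definite matrix G directly from the true weights here, while the algorithm obtains it from the recovered matrix space \(\hat{\mathcal A}\):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5

# Linearly independent, non-orthogonal unit weights a_1, ..., a_m (assumed).
A = rng.standard_normal((m, m)) + 2 * np.eye(m)
A /= np.linalg.norm(A, axis=0)

# Positive definite G = sum_i a_i (x) a_i, built from the true weights for
# illustration only.
G = A @ A.T

# Whitening: spectral decomposition G = U D U^T, then W = D^{-1/2} U^T.
D, U = np.linalg.eigh(G)
W = np.diag(D ** -0.5) @ U.T

# The whitened weights W a_i form an orthonormal system: W A is an orthogonal
# matrix, since (W A)(W A)^T = W G W^T = I.
V = W @ A
print(np.abs(V.T @ V).round(6))   # ~ identity matrix
```

This is why, after whitening, one may assume nearly orthonormal weights in the first place.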
After whitening, one could assume without loss of generality that the vectors \((a_i)_{i=1,\dots ,m} \in \mathbb {R}^m\) are nearly orthonormal in the first place. Hence, the representation (5) would be a near spectral decomposition of the Hessian, and the components \(a_i \otimes a_i\) would represent the approximate eigenvectors. However, the numerical stability of spectral decompositions is ensured only under spectral gaps [6, 50]. In order to maximally stabilize the approximation of the \(a_i\)’s, one seeks matrices \(M \in \hat{\mathcal{A}}\) with the maximal spectral gap between the first and the second largest eigenvalues. This is achieved by the maximizers of the following nonconvex program
$$\begin{aligned} \max \Vert M\Vert \quad \text {subject to}\quad M \in \hat{\mathcal{A}},\ \Vert M\Vert _F = 1, \end{aligned}$$(6)
where \(\Vert \cdot \Vert \) and \(\Vert \cdot \Vert _F\) are the spectral and Frobenius norms, respectively. This program can be solved by a suitable projected gradient ascent, see for instance [23, Algorithm 3.4] and Algorithm 3, and any resulting maximizer has the eigenvector associated with its largest eigenvalue in absolute value close to one of the \(a_i\)’s. Once approximations \(\hat{a}_i\) to all the \(a_i\)’s are retrieved, it is not difficult to perform the identification of the activation functions \(g_i\), see [23, Algorithm 4.1, Theorem 4.1]. The recovery of the network resulting from this algorithmic pipeline is summarized by the following statement.
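A minimal sketch of such a projected gradient ascent, in the idealized case where \(\hat{\mathcal A}\) is spanned exactly by \(a_i \otimes a_i\) for orthonormal \(a_i\) (our own toy setup, not the general algorithm of [23]):

```python
import numpy as np

rng = np.random.default_rng(4)
m, r = 6, 4

# Toy matrix space spanned by a_i (x) a_i for orthonormal a_i; in this case the
# rank-1 generators are an orthonormal set in the Frobenius metric.
Aw = np.linalg.qr(rng.standard_normal((m, m)))[0][:, :r]
basis = np.stack([np.outer(a, a).ravel() for a in Aw.T])      # (r, m*m)

def proj_W(M):
    """Orthogonal projection onto the matrix space (Frobenius metric)."""
    return (basis.T @ (basis @ M.ravel())).reshape(m, m)

# Projected gradient ascent for  max ||M||  s.t.  M in the space, ||M||_F = 1.
M = proj_W(rng.standard_normal((m, m)))
M /= np.linalg.norm(M)
gamma = 0.5
for _ in range(200):
    lam, V = np.linalg.eigh(M)
    k = np.argmax(np.abs(lam))
    u = V[:, k]
    # sign(lambda_max) u u^T is a (sub)gradient of the spectral norm at M
    M = proj_W(M + gamma * np.sign(lam[k]) * np.outer(u, u))
    M /= np.linalg.norm(M)

lam, V = np.linalg.eigh(M)
u = V[:, np.argmax(np.abs(lam))]
print(np.max(np.abs(u @ Aw)))   # ~ 1: the top eigenvector aligns with some a_i
```

Each iteration amplifies the currently dominant rank-1 component and renormalizes, so the iterates converge to one of the generators \(\pm a_i \otimes a_i\), as described above.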
Theorem 2
([23], Theorem 1.2) Let f be a real-valued function defined on a neighborhood of \(\Omega =B_1^d\), which takes the form
for \(m\le d\). Let \(g_i\) be three times continuously differentiable on a neighborhood of \([-1,1]\) for all \(i=1,\dots ,m\), and let \(\{a_1,\dots ,a_m\}\) be linearly independent. We additionally assume both J[f] and H[f] of maximal rank m. Then, for all \(\epsilon >0\) (stepsize employed in the computation of finite differences), using at most \(m_{\mathcal X} [(d+1)+ (m+1)(m+2)/2]\) random exact point evaluations of f, the nonconvex program (6) constructs approximations \(\{\hat{a}_1,\dots ,\hat{a}_m\}\) of the weights \(\{a_1,\dots ,a_m\}\) up to a sign change for which
with probability at least \(1 - m \exp \Bigl (-\frac{m_{\mathcal X}c }{2 \max \{C_1,C_2\}^2 m^2}\Bigr )\), for a suitable constant \(c>0\) intervening (together with some fixed power of m) in the asymptotic constant of the approximation (7). Moreover, once the weights are retrieved, one constructs an approximating function \(\hat{f}:B_1^d \rightarrow \mathbb R\) of the form
such that
While this result has been generalized to the case of passive sampling in [23], and through whitening it allows for the identification of nonorthogonal weights, it is restricted to the case of \(m \le d\) and linearly independent weights \(\{a_i: i=1,\dots ,m\}\).
The main goal of this paper is generalizing this approach to account for both the identification of two fully nonlinear hidden layer neural networks and the case where \(m > d\) and the weights are not necessarily nearly orthogonal or even linearly independent (see Remark 10 below).
1.2.3 Deeper Networks: The Two-Layer Case
What follows further extends the theory discussed in the previous sections to a wider class of functions, namely neural networks with two hidden layers. By doing so, we will also address a relevant open problem that was stated in [23], which deals with the identification of shallow neural networks where the number of neurons is larger than the input dimension. First, we need a precise definition of the architecture of the neural networks we intend to consider.
Definition 3
Let \(0 < m_1\le m_0\le d\), \(\{a_1,\ldots ,a_{m_0}\}\subset \mathbb {S}^{d-1}\), \(\{b_1,\ldots ,b_{m_1}\}\subset \mathbb {S}^{m_0-1}\), and let \(g_1,\ldots ,g_{m_0}\) and \(h_1,\ldots ,h_{m_1}\) be univariate functions. Denote \(A := (a_1|\ldots |a_{m_0}) \in \mathbb {R}^{d\times m_0}\), \(B := (b_1|\ldots |b_{m_1}) \in \mathbb {R}^{m_0\times m_1}\), \(G_0 := {\text {diag}}\big ( g_1'(0),\ldots ,g_{m_0}'(0)\big )\), and assume the following:
- (A1) \(g_i'(0) \ne 0\) for all \( i \in [m_0]\),
- (A2) the system \(\{a_1,\ldots ,a_{m_0}, v_1,\ldots ,v_{m_1}\} \subset \mathbb {S}^{d-1}\) with \(v_\ell := \frac{AG_0 b_\ell }{\left\| {AG_0 b_\ell }\right\| }\) satisfies the frame condition
$$\begin{aligned}&c_f\left\| {x}\right\| ^2 \le \sum \limits _{i=1}^{m_0} \left\langle x, a_i\right\rangle ^2 + \sum \limits _{\ell =1}^{m_1} \left\langle x, v_\ell \right\rangle ^2 \le C_F\left\| {x}\right\| ^2 \nonumber \\&\quad \text { for } 0< c_f\le C_F \text { and all } x \in {\text {span}}\{a_1,\dots ,a_{m_0}\}, \end{aligned}$$(9)
- (A3) the derivatives of the \(g_i\) and \(h_\ell \) are uniformly bounded according to
$$\begin{aligned} \quad \max \limits _{i=1,\ldots ,m_0}\sup \limits _{t \in \mathbb {R}}\left| {g_i^{(k)}(t)}\right| \le \kappa _{k}, \quad \text { and }\quad \max \limits _{\ell =1,\ldots ,m_1}\sup \limits _{t \in \mathbb {R}}\left| {h_\ell ^{(k)}(t)}\right| \le \eta _{k},\quad k=0,1,2,3. \end{aligned}$$
Then we define a set of two-layer networks by
$$\begin{aligned} \mathcal {F}(d,m_0,m_1) := \left\{ f(x) = \sum _{\ell =1}^{m_1} h_\ell \left( \sum _{i=1}^{m_0} b_{i\ell }\, g_i(a_i^T x)\right) \right\} , \end{aligned}$$
where \(b_{i\ell }\) is the \((i,\ell )\)-th entry of B, respectively, the i-th entry of vector \(b_\ell \).
Sometimes it may be convenient to use the more compact writing \(f(x) =1^T h(B^Tg(A^T x))\), where \(g=(g_1,\dots , g_{m_0})\) and \(h=(h_1,\dots , h_{m_1})\). In the previous section, we presented a dimension reduction that can be applied to one-layer neural networks and can be useful to reduce the dimensionality from the input dimension to the number of neurons of the first layer. The same approach can be applied to networks of the class \(\mathcal{F}(d,m_0, m_1)\). For the approximation error of the active subspace, we end up with the following corollary of Theorem 1.
Corollary 4
(cf. Theorem 1) Assume that \(f \in \mathcal{F}(d,m_0, m_1)\) and let \(P_{\hat{A}}\) be constructed as described in Algorithm 1 by sampling \(m_X(d+1)\) values of f. Let \(0<s<1\) and assume that the \(m_0\)-th singular value of J[f] fulfills \( \sigma _{m_0}\left( J[f]\right) \ge \alpha >0\). Then we have
with probability at least \(1-m_0\exp (-\frac{s^2 m_X \alpha }{2 C_4 m_1})\) and constants \(C_3, C_4 > 0\) that depend only on \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).
In view of Corollary 4, we can again apply [23, Theorem 1.1] and assume, without loss of generality, that \(d=m_0\), in which case the frame condition (9) automatically implies the invertibility of A, as the vectors \(v_\ell \) are linear combinations of the \(a_i\)’s.
2 Approximating the Span of Tensors of Weights
In the one-layer case described earlier, the unique identification of the weights is made possible by constructing a matrix space whose rank-1 basis elements are outer products of the weight profiles of the network. This section illustrates the extension of this approach beyond shallow neural networks. Once again, we will make use of differentiation, and overall there will be many parallels to the approach in [23]. However, the intuition behind the matrix space will be less straightforward, because we can no longer directly express the second derivative of a two-layer network as a linear combination of symmetric rank-1 matrices. This is due to the fact that the Hessian matrix of a network \(f \in \mathcal{F}(m_0, m_0, m_1)\) has the form
Therefore, \(\nabla ^2 f(x) \in {\text {span}}{\lbrace a_i\otimes a_j + a_j\otimes a_i \;\vert \;i,j = 1,\dots ,m_0\rbrace }\), which has dimension \(\frac{m_0(m_0+1)}{2}\) and is in general not spanned by symmetric rank-1 matrices. This expression is indeed quite complicated, due to the chain rule and the mixed tensor contributions that consequently appear. At first glance, it would seem impossible to use a similar approach as the one for shallow neural networks recalled in the previous section. Nevertheless, a relatively simple algebraic manipulation allows one to recognize some useful structure: for a fixed \(x \in \mathbb {R}^{m_0}\), we rearrange the expression as
$$\begin{aligned} \nabla ^2 f(x) = \sum _{i=1}^{m_0}\beta _i(x)\, a_i\otimes a_i + \sum _{\ell =1}^{m_1} h_\ell ''\big (b_\ell ^T g(A^T x)\big ) \left( \sum _{j=1}^{m_0} b_{j\ell }\, g_j'(a_j^T x)\, a_j\right) \otimes \left( \sum _{j=1}^{m_0} b_{j\ell }\, g_j'(a_j^T x)\, a_j\right) , \end{aligned}$$(10)
with \(\beta _i(x) = g_i''(a_i^T x)\sum _{\ell =1}^{m_1} h_\ell '\big (b_\ell ^T g(A^T x)\big )\, b_{i\ell }\),
which is a combination of symmetric rank-1 matrices, since \(\sum ^{m_0}_{j=1} b_{j\ell }g_j'(a_j^T x)a_j \in \mathbb {R}^{m_0}\). We write the latter expression more compactly by introducing the notation
$$\begin{aligned} \nabla ^2 f(x) = \sum _{i=1}^{m_0}\beta _i(x)\, a_i\otimes a_i + \sum _{\ell =1}^{m_1}\gamma _\ell (x)\, v_\ell (x)\otimes v_\ell (x), \quad \gamma _\ell (x) = h_\ell ''\big (b_\ell ^T g(A^T x)\big )\, \Vert A G_x b_\ell \Vert _2^2, \end{aligned}$$
where \(G_x = {\text {diag}}\left( g_1'(a_1^T x), \dots , g_{m_0}'(a_{m_0}^T x)\right) \in \mathbb {R}^{m_0\times m_0}\) and \(v_\ell (x) = A G_x b_\ell / \Vert A G_x b_\ell \Vert _2\).
Fig. 1: Illustration of the relationship between \(\mathcal{W}\) (black line) and the region covered by the Hessians (light blue region), given by two nonlinear cones that fan out from \(\nabla ^2 f(0)\). There is no reason to believe that these cones are symmetric around \(\mathcal{W}\). The gray cones show the maximal deviation of \(\hat{\mathcal{W}}\) from \(\mathcal{W}\)
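The rank-1 rearrangement of the Hessian described above can be verified numerically. The following sketch, with assumed tanh-type activations of our own choosing, compares a finite-difference Hessian with the two sums of the decomposition:

```python
import numpy as np

rng = np.random.default_rng(6)
m0, m1, eps = 4, 2, 1e-4

# Hypothetical network in F(m0, m0, m1) with tanh-type activations (assumed).
A = rng.standard_normal((m0, m0)); A /= np.linalg.norm(A, axis=0)
B = rng.standard_normal((m0, m1)); B /= np.linalg.norm(B, axis=0)
g  = lambda t: np.tanh(t + 0.3)
g1 = lambda t: 1 - np.tanh(t + 0.3) ** 2                             # g'
g2 = lambda t: -2 * np.tanh(t + 0.3) * (1 - np.tanh(t + 0.3) ** 2)   # g''
h1 = lambda t: 1 - np.tanh(t) ** 2                                   # h'
h2 = lambda t: -2 * np.tanh(t) * (1 - np.tanh(t) ** 2)               # h''
f = lambda x: np.tanh(B.T @ g(A.T @ x)).sum()

x = 0.3 * rng.standard_normal(m0)
z, w = A.T @ x, B.T @ g(A.T @ x)

# First sum: beta_i(x) a_i (x) a_i with beta_i = g_i''(z_i) sum_l h_l'(w_l) b_il.
beta = g2(z) * (B @ h1(w))
H1 = (A * beta) @ A.T
# Second sum: h_l''(w_l) (A G_x b_l) (x) (A G_x b_l).
Vx = A @ (g1(z)[:, None] * B)          # columns A G_x b_l
H2 = (Vx * h2(w)) @ Vx.T

# Finite-difference Hessian of f at x, for comparison.
I, H_fd = np.eye(m0), np.empty((m0, m0))
for i in range(m0):
    for j in range(m0):
        H_fd[i, j] = (f(x + eps * (I[i] + I[j])) - f(x + eps * I[i])
                      - f(x + eps * I[j]) + f(x)) / eps ** 2
print(np.linalg.norm(H_fd - (H1 + H2)))   # ~ O(eps)
```

The two expressions agree up to the finite difference truncation error, confirming the rank-1 structure of both sums.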
Let us now introduce the fundamental matrix space
$$\begin{aligned} \mathcal {W} = {\text {span}}\left\{ a_1\otimes a_1,\dots ,a_{m_0}\otimes a_{m_0},\, v_1\otimes v_1,\dots ,v_{m_1}\otimes v_{m_1}\right\} , \end{aligned}$$
where the \({\text {span}}\) is taken over \(\mathbb {R}\), \(a_1, \dots , a_{m_0}\) are the weight profiles of the first layer, and
$$\begin{aligned} v_\ell := \frac{A G_0 b_\ell }{\Vert A G_0 b_\ell \Vert _2}, \quad \ell \in [m_1], \end{aligned}$$
encode “entangled” information between A and B. For this reason, we call the \(v_\ell \)’s entangled weights. Let us stress at this point that the definition and the constructive approximation of the space \(\mathcal{W}\) is perhaps the most crucial and relevant contribution of this paper. In fact, by inspecting carefully the expression (10), we immediately notice that \(\nabla ^2 f(0) \in \mathcal{W}\), and also that the first sum in (10), namely \(\sum _{i=1}^{m_0} \beta _i(x) a_i\otimes a_i\), lies in \(\mathcal{W}\) for all \(x \in \mathbb {R}^{m_0}\). Moreover, for arbitrary sampling points x, deviations of \(\nabla ^2 f(x)\) from \(\mathcal{W}\) are only due to the second term in (10). The intuition is that for suitably centered distributions of sampling points \(x_1,\ldots ,x_{m_X}\), with \(a_j^T x_i \approx 0\) so that \(G_{x_i} \approx G_0\), the Hessians \(\{\nabla ^2 f(x_i): i \in [m_X]\}\) are distributed around the space \(\mathcal{W}\); see Fig. 1 for a two-dimensional sketch of the geometrical situation. Hence, we attempt an approximation of \(\mathcal{W}\) by PCA of a collection of such approximate Hessians. Practically, by active sampling (targeted evaluations of the network f), we first construct finite difference estimates \(\{\Delta _\epsilon ^2 f(x_i) : i \in [m_X]\}\) of the Hessian matrices \(\{\nabla ^2 f(x_i) : i \in [m_X]\}\) (see Sect. 2.1), at sampling points \(x_1, \dots , x_{m_X} \in \mathbb {R}^{m_0}\) drawn independently from a suitable distribution \(\mu _X\). Next, we define the matrix
whose columns are the vectorization of the approximate Hessians. Finally, we produce the approximation \(\hat{\mathcal{W}}\) to \(\mathcal{W}\) as the span of the first \(m_0+m_1\) left singular vectors of the matrix \(\hat{W}\), where the choice \(m_0+m_1\) enforces \(\dim (\hat{\mathcal{W}}) = \dim (\mathcal{W}) = m_0+m_1\). The whole procedure of calculating \(\hat{\mathcal{W}}\) is given in Algorithm 2. It should be clear that the choice of \(\mu _X\) plays a crucial role for the quality of this method. In the analysis that follows, we focus on distributions that are centered and concentrated. Figure 1 helps to form a better geometrical intuition of the result of the procedure. It shows the region covered by the Hessians, indicated by the light blue area, which envelopes the space \(\mathcal{W}\) in a sort of nonlinear/nonconvex cone originating from \(\nabla ^2 f(0)\). In general, the Hessians do not concentrate around \(\mathcal{W}\) in a symmetric way, which means that the “center of mass” of the Hessians can never be perfectly aligned with the space \(\mathcal{W}\), regardless of the number of samples. In this analogy, the center of mass is equivalent to the space estimated by Algorithm 2, which essentially is a noncentered principal component analysis of observed Hessian matrices. The primary result of this section is Theorem 5, which provides an estimate of the approximation error of Algorithm 2 depending on the sub-Gaussian norm of the sample distribution \(\mu _X\) and the number of neurons in the respective layers. More precisely, this result gives a precise worst case estimate of the error caused by the imbalance of mass. For reasons mentioned above, the error does not necessarily vanish with an increasing number of samples, but the probability under which the statement holds will tend to 1. In Fig. 1, the estimated region is illustrated by the gray cones that show the maximal, worst-case deviation of \(\hat{\mathcal{W}}\). 
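A minimal numerical sketch of Algorithm 2, with assumed tanh-type activations of our own choosing: the top \(m_0+m_1\) left singular vectors of the matrix of vectorized finite-difference Hessians capture almost all of their energy, while the printed residuals of the true generators \(a_i\otimes a_i\), \(v_\ell \otimes v_\ell \) illustrate that \(\hat{\mathcal W}\) tracks \(\mathcal W\) only up to the bias discussed above:

```python
import numpy as np

rng = np.random.default_rng(7)
m0, m1, m_X, eps = 5, 2, 100, 1e-3

# Hypothetical network f in F(m0, m0, m1) with tanh-type activations (assumed).
A = np.linalg.qr(rng.standard_normal((m0, m0)))[0]
B = rng.standard_normal((m0, m1)); B /= np.linalg.norm(B, axis=0)
g = lambda t: np.tanh(t + 0.2)            # g_i'(0) != 0
f = lambda x: np.tanh(B.T @ g(A.T @ x)).sum()

def fd_hessian(x):                        # entrywise second finite differences
    I, H = np.eye(m0), np.empty((m0, m0))
    for i in range(m0):
        for j in range(m0):
            H[i, j] = (f(x + eps * (I[i] + I[j])) - f(x + eps * I[i])
                       - f(x + eps * I[j]) + f(x)) / eps ** 2
    return (H + H.T) / 2

# Concentrated, centered sampling distribution mu_X, then noncentered PCA.
X = 0.05 * rng.standard_normal((m_X, m0))
W_hat_mat = np.stack([fd_hessian(x).ravel() for x in X], axis=1)  # (m0^2, m_X)
U, s, _ = np.linalg.svd(W_hat_mat, full_matrices=False)
P = U[:, :m0 + m1] @ U[:, :m0 + m1].T     # projector onto the estimated W_hat

# The top m0+m1 directions capture nearly all of the Hessians' energy ...
rel = np.linalg.norm(P @ W_hat_mat - W_hat_mat) / np.linalg.norm(W_hat_mat)
# ... while the residuals of the true rank-1 generators need not vanish.
G0 = 1 - np.tanh(0.2) ** 2                # here g_i'(0) is the same for all i
V = A @ (G0 * B); V /= np.linalg.norm(V, axis=0)
gens = [np.outer(c, c).ravel() for c in np.hstack([A, V]).T]
print(rel, [round(float(np.linalg.norm(t - P @ t)), 3) for t in gens])
```

The nonvanishing generator residuals are exactly the worst-case bias that Theorem 5 quantifies in terms of the sub-Gaussian norm of \(\mu _X\).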
One crucial condition for Theorem 5 to hold is that there exists an \(\alpha >0\) such that
This assumption ensures that the space spanned by the observed Hessians has, in expectation, at least dimension \(m_0+ m_1\). Aside from this technical aspect, the condition implicitly rules out reducible network configurations, for which certain weights cannot be recovered. For example, we can define a network in \(\mathcal{F}(2,2,1)\) with weights given by
It is easy to see that \(a_2\) will never be used during a forward pass through the network, which makes it impossible to recover \(a_2\) from the output of the network.
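This reducibility is easy to reproduce numerically. In the sketch below the particular weights are our own choice (the displayed weights of the example are not repeated here): taking \(b = (1,0)^T\) zeroes out the second neuron, so the output is blind to \(a_2\).

```python
import numpy as np

rng = np.random.default_rng(1)
a1 = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])                   # second entry zero: a_2 is inert

def f(x, a2):
    A = np.column_stack([a1, a2])          # network in F(2, 2, 1)
    return np.tanh(b @ np.tanh(A.T @ x))   # tanh activations are illustrative

a2, a2_alt = np.array([0.6, 0.8]), np.array([-0.8, 0.6])
x = rng.standard_normal(2)
# Replacing a_2 by a completely different unit vector changes nothing.
gap = abs(f(x, a2) - f(x, a2_alt))
```

No amount of querying f can then distinguish between the two choices of \(a_2\), which is exactly the obstruction the assumption excludes.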
In the theorem below and in the proofs that follow, we will make use of the sub-Gaussian norm \(\left\| {\cdot }\right\| _{\psi _2}\) of a random variable. This quantity measures how fast the tails of a distribution decay, and such a decay plays an important role in several concentration inequalities. More generally, for \(p\ge 1\), the \(\psi _p\)-norm of a scalar random variable Z is defined as
For a random vector X on \(\mathbb {R}^d\) the \(\psi _p\)-norm is given by
The random variables for which \(\left\| {X}\right\| _{\psi _1} <\infty \) are called subexponential and those for which \(\left\| {X}\right\| _{\psi _2} <\infty \) are called sub-Gaussian. More generally, the Orlicz space \(L_{\psi _p}= L_{\psi _p}(\Omega , \Sigma , \mathbb P)\) consists of all real random variables X on the probability space \((\Omega , \Sigma , \mathbb P)\) with finite \(\left\| {X}\right\| _{\psi _p}\) norm and its elements are called p-subexponential random variables. Below, we mainly focus on sub-Gaussian random variables. In particular, every bounded random variable is sub-Gaussian, which covers all the cases we discuss in this work. We refer to [65] for more details. One example of a sub-Gaussian distribution is the uniform distribution on the unit sphere, \(X \sim {{\,\mathrm{Unif}\,}}(\mathbb {S}^{d-1})\), which has sub-Gaussian norm \(\left\| {X}\right\| _{\psi _2} = \frac{1}{\sqrt{d}}\).
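As a quick numerical illustration (our own construction; the value \(1/\sqrt{d}\) is to be read up to the absolute constant fixed by the chosen normalization of \(\left\| \cdot \right\| _{\psi _2}\)), one can estimate the \(\psi _2\)-norm of a one-dimensional marginal \(Z = \langle x, X\rangle \) of \(X \sim {{\,\mathrm{Unif}\,}}(\mathbb S^{d-1})\) by Monte Carlo bisection on the defining inequality \(\mathbb E\, e^{Z^2/t^2} \le 2\):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 16, 200_000

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # i.i.d. points uniform on S^{d-1}
Z = X[:, 0]                                      # marginal <e_1, X>; by rotational
                                                 # symmetry any direction has the same law

def mgf(t):                                      # empirical E exp(Z^2 / t^2)
    return np.mean(np.exp(np.minimum(Z**2 / t**2, 700.0)))

lo, hi = 1e-3, 2.0                               # bisect mgf(t) = 2; mgf is decreasing in t
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mgf(mid) > 2 else (lo, mid)

psi2 = hi                                        # estimated psi_2-norm
```

The estimate should land within a small constant factor of \(1/\sqrt{d}\), confirming the \(d^{-1/2}\) scaling used throughout this section.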
![figure b](http://media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs00365-021-09550-5/MediaObjects/365_2021_9550_Figb_HTML.png)
Theorem 5
Let \(f \in \mathcal{F}(m_0, m_0, m_1)\) be a neural network within the class described in Definition 3 and consider the space \(\mathcal{W}\) as defined in (14). Assume that \(\mu _X\) is a probability measure with \(\mathrm{supp\ }(\mu _X) \subset B^{m_0}_1\), \(\mathbb {E}X = 0\), and that there exists an \(\alpha > 0\) such that
Then, for any \(\epsilon >0\), Algorithm 2 returns a projection \(P_{\hat{\mathcal{W}}}\) that fulfills
for a suitable subspace \(\mathcal{W}^* \subset \mathcal{W}\) (we can actually assume that \(\mathcal{W}^* = \mathcal{W}\) according to Remark 6 below) with probability at least
where \(c>0\) is an absolute constant and \(C, C_1, C_{\Delta } >0 \) are constants depending on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).
Remark 6
If \(\epsilon >0\) is sufficiently small, due to (15) the space \(\hat{\mathcal{W}}\) returned by Algorithm 2 has dimension \(m_0+ m_1\). If the error bound (16) in Theorem 5 is such that \(\left\| {P_{\mathcal{W}^*} -P_{\hat{\mathcal{W}}}}\right\| _F < 1\), then \(\hat{\mathcal{W}}\) and \(\mathcal{W}^*\) must have the same dimension. Moreover, \(\mathcal{W}^* \subset \mathcal{W}\) and \({\text {dim}}(\mathcal{W}) = m_0+ m_1\) would necessarily imply that \(\mathcal{W}= \mathcal{W}^*\). Hence, for \(\left\| {P_{\mathcal{W}^*} -P_{\hat{\mathcal{W}}}}\right\| _F < 1\) and \(\epsilon >0\) sufficiently small, we have \(\mathcal{W}^* = \mathcal{W}\).
As already mentioned above, for \(\mu _X= {{\,\mathrm{Unif}\,}}(\mathbb {S}^{m_0-1})\) we have \(\left\| {X}\right\| _{\psi _2} = \frac{1}{\sqrt{m_0}}\). In this case, the error bound (16) behaves like
which is small for \(\epsilon >0\) small and \(m_0\gg m_1\). The latter condition seems to favor networks for which the inner layer has a significantly larger number of neurons than the outer layer. This expectation is actually observed numerically, see Sect. 4. We have to add, though, that the parameter \(\alpha >0\) that intervenes in the error bound (16) might also depend on \(m_0, m_1\) (as it is in fact an estimate of an \((m_0+ m_1)^{th}\) singular value as in (15)). Hence, the dependency on the network dimensions is likely more complex and governed by the interplay between the input distribution \(\mu _X\) and the network architecture. In fact, at least judging from our numerical experiments, the error bound (16) is rather pessimistic and certainly describes a worst case analysis. One reason might be that some crucial estimates in its proof could be significantly improved. Another reason could be the rather great generality of the activation functions of the networks, which we analyze in this paper, as described in Definition 3. Perhaps the specific instances used in the numerical experiments enjoy better identification properties.
2.1 Estimating Hessians of the Network by Finite Differences
Before addressing the proof of Theorem 5, we give a precise definition of the finite differences we are using to approximate the Hessian matrices. Denote the i-th Euclidean canonical basis vector in \(\mathbb {R}^d\) by \(e_i\) and the second-order finite difference approximation of \(\nabla ^2 f(x)\) by
for \(i,j = 1,\dots , d=m_0\) and a step-size \(\epsilon >0\).
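A standard realization of such a second-order difference is the four-point stencil \(\bigl (f(x+\epsilon e_i+\epsilon e_j)-f(x+\epsilon e_i)-f(x+\epsilon e_j)+f(x)\bigr )/\epsilon ^2\). The sketch below (our own, with an assumed test function; the exact stencil of (17) may differ in details) checks this approximation against an analytic Hessian:

```python
import numpy as np

def fd_hessian(f, x, eps):
    """Four-point second-order differences, entrywise O(eps) accurate."""
    d = x.size
    E = eps * np.eye(d)
    return np.array([[(f(x + E[i] + E[j]) - f(x + E[i]) - f(x + E[j]) + f(x)) / eps**2
                      for j in range(d)] for i in range(d)])

# Test function with known Hessian: f(x) = exp(a^T x), Hess f(x) = exp(a^T x) a a^T.
a = np.array([0.3, -0.5, 0.2])
f = lambda x: np.exp(a @ x)
x0 = np.array([0.1, 0.2, -0.3])
H_true = np.exp(a @ x0) * np.outer(a, a)

err = np.abs(fd_hessian(f, x0, 1e-4) - H_true).max()
```

The entrywise error scales like \(\epsilon \) times third derivatives of f, consistent with the worst-case bound of Lemma 7 below.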
Lemma 7
Let \(f \in \mathcal{F}(m_0,m_0, m_1)\) be a neural network. Further assume that \(\Delta _\epsilon ^2 f(x)\) is constructed as in (17) for some \(\epsilon >0\). Then we have
where \(C_{\Delta }>0\) is a constant depending on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).
For the proof of Lemma 7, we simply use the Lipschitz continuity of the functions g, h and of their derivatives, and make use of \(\left\| {a}\right\| _2, \left\| {b}\right\| _2 \le 1\). The details can be found in the Appendix (Sect. 1).
2.2 Span of Tensors of (Entangled) Network Weights: Proof of Theorem 5
The proof can essentially be divided into two separate bounds. Both will be addressed separately with the two lemmas below. For both lemmas, we will assume that \(X_1, \dots , X_{m_X} \sim \mu _X\) independently and that \(\mathrm{supp\ }(\mu _X) \subseteq B^{m_0}_1\). Additionally, we define the random matrices
where \(P_{\mathcal{W}}\) denotes the orthogonal projection onto \(\mathcal{W}\) (cf. (14)). For the reader’s convenience, we recall here from (10) that the Hessian matrix of \(f\in \mathcal{F}(m_0, m_0, m_1)\) can be expressed as
where \(\gamma _i(x), \tau _\ell (x), \) and \(v_\ell (x)\) are introduced in (11)–(13). We further simplify this expression by introducing the notations
which allow us to rewrite (10) in terms of matrix multiplications
Lemma 8
Let \(f \in \mathcal{F}(m_0, m_0, m_1)\) and let \(\hat{W}, W^*\) be defined as in (19)–(20), where \(\mu _X\) satisfies \(\mathrm{supp\ }(\mu _X) \subseteq B^{m_0}_1\) and has sub-Gaussian norm \(\left\| {X}\right\| _{\psi _2}\). Then the bound
holds with probability at least \( 1- 2 \exp \left( -c m_1m_X \right) \), where \(c > 0\) is an absolute constant and \(C, C_{\Delta }>0\) depend only on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).
Proof
By triangle inequality, we get
For the first term on the right-hand side, we can use the worst-case estimate from Lemma 7 to get
for some constant \(C_{\Delta }>0\). The second term in (21) can be bounded by (the explanation of the individual identities and estimates follows immediately below)
In the first two equalities, we made use of the fact that \(A\Gamma _x A^T \in \mathcal{W}\) and that by definition of an orthogonal projection \(\left\| {V_{X_i}T_{X_i}V_{X_i}^T - P_{\mathcal{W}}V_{X_i}T_{X_i}V_{X_i}^T}\right\| _F \le \left\| {V_{X_i}T_{X_i}V_{X_i}^T - V_0 T_{X_i} V_0^T }\right\| _F\). The remaining inequalities follow directly from the submultiplicativity of \(\left\| {\cdot }\right\| _F\) and \(\left\| {\cdot }\right\| \) combined with the Lipschitz continuity of the activation functions and their derivatives (cf. 3 in Definition 3). Since \(\left\| {a_j}\right\| \le 1\), we can estimate the sub-exponential norm of \(\left\| {A^T X_i}\right\| ^2_\infty = \max _{1 \le j \le m_0} \langle X_i, a_j \rangle ^2\) by
for an absolute constant \(c_1>0\), where we applied [64, Lemma 2.2.2] in the first inequality and used that \(\left\| {Y}\right\| _{\psi _2}^2 = \left\| {Y^2}\right\| _{\psi _1}\) for any scalar random variable Y together with the fact that the sub-Gaussian norm of a vector is defined by \(\left\| {X}\right\| _{\psi _2} = \sup _{x\in \mathbb {S}^{m_0-1}} |\langle x, X\rangle |\) (cf. [65]). The random vectors \(X_i\sim \mu _X\) are i.i.d., which allows us to drop the dependency on i in the last step. The previous bound also guarantees a bound on the expectation, which is due to \(\mathbb {E}[|Y|^p] \le p! \left\| {Y}\right\| _{\psi _1}\) (cf. [64]), namely, for \(p=1\) and \(Y=\max _{1 \le j \le m_0} \langle X, a_j \rangle ^2\)
Denote \(Z_i := \left\| {A^T X_i}\right\| ^2_\infty \) for all \(i = 1, \dots , m_X\), then
Therefore, applying the Bernstein inequality for sub-exponential random variables [65, Theorem 2.8.1] to the right sum in (24) yields
with probability at least
for all \(t \ge 0\) and an absolute constant \(c >0\). Then, by choosing \(t = c_1 m_X m_1\log (m_0+1) \left\| {X}\right\| _{\psi _2}^2\), we get
with probability at least
From (21), combining (22) and (25) yields
where we used Lemma 7 in the second inequality, and the result holds at least with the probability given as in (26). Setting \(C:=\sqrt{8c_1} \kappa _1 \kappa _2 \eta _2 >0\) finishes the proof.
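The concentration step above can be probed numerically. The sketch below (our own construction, with an orthonormal A and illustrative sizes) draws the \(X_i\) uniformly from the unit ball and checks that the empirical means of \(Z_i = \left\| {A^T X_i}\right\| ^2_\infty \) fluctuate only mildly around their expectation, as the Bernstein bound predicts for these bounded, hence sub-exponential, variables.

```python
import numpy as np

rng = np.random.default_rng(3)
d, mX, trials = 20, 500, 200

A = np.linalg.qr(rng.standard_normal((d, d)))[0]     # unit-norm columns a_j

def sample_ball(n):
    """n i.i.d. points uniform in the unit ball B^d_1."""
    Y = rng.standard_normal((n, d))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    return Y * rng.random((n, 1)) ** (1.0 / d)

# Empirical means of Z_i = ||A^T X_i||_inf^2 over independent repetitions.
means = np.array([np.max((sample_ball(mX) @ A) ** 2, axis=1).mean()
                  for _ in range(trials)])
```

The relative spread of the repeated means decays like \(m_X^{-1/2}\), matching the sub-exponential concentration of \(Z_i \le 1\) used in the proof.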
Lemma 9
Let \(\mu _X\) be centered with \(\mathrm{supp\ }(\mu _X) \subseteq B^{m_0}_1\). Furthermore, assume that \(f \in \mathcal{F}(m_0, m_0, m_1)\) and that \(\hat{W}\) is given by (19) with step-size \(\epsilon > 0\). If
then we have
with probability at least \(1 - (m_0+ m_1)\exp \left( -\frac{m_X \alpha }{8 C_1 \left\| {A}\right\| ^4\left\| {B}\right\| ^4 m_1}\right) \), where \(C_\Delta , C_1 > 0\) depend only on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).
Proof
By Weyl’s inequality and re-using (22), we obtain
For the first term of the right-hand side, we have \(\sigma _{m_0+ m_1}(W)^2 = \sigma _{m_0+ m_1}(W W^T)\), which can be written as a sum of the outer products of the columns
Additionally, the matrices \({\text {vec}}(\nabla ^2 f(X_i)) \otimes {\text {vec}}(\nabla ^2 f(X_i))\) are independent positive semidefinite random matrices. The Chernoff bound for eigenvalues of sums of random matrices, due to Gittens and Tropp [25], applied to the right-hand side of the last equation yields the following lower bound:
with probability at least
where we set \(K = \max _{x \in B^{m_0}_1} \left\| {{\text {vec}}(\nabla ^2 f(x)) \otimes {\text {vec}}(\nabla ^2 f(x))}\right\| \). To estimate K more explicitly, we first have to bound the norm of the Hessian matrices. Let \(X \sim \mu _X\), then
for some constant \(C_1 > 0\). Now we can further estimate K by
Finally, we can finish the proof by plugging the above into (27) and by setting \(t = \frac{1}{2}\).
Proof of Theorem 5
The proof is a combination of the previous lemmas together with an application of Wedin’s bound [59, 66]. Given \(\hat{W},W^*\), let \(\hat{U} \hat{\Sigma }\hat{V}^T, U^* \Sigma ^* {V^*}^T\) be their respective singular value decompositions. Furthermore, denote by \(\hat{U}_1, U_1^*\) the matrices formed by only the first \(m_0+m_1\) columns of \(\hat{U}, U^*\), respectively. According to this notation, Algorithm 2 returns the orthogonal projection \(P_{\hat{\mathcal{W}}}= \hat{U}_1 \hat{U}_1^T.\) We also denote by \(P_{\mathcal{W}^*}\) the projection given by \(P_{\mathcal{W}^*} = U_1^* {U_1^*}^T.\) Then we can bound the difference of the projections by applying Wedin’s bound
as soon as \(\bar{\alpha }>0\) satisfies
Since \( \mathcal{W}\) has dimension \(m_0+ m_1\), we have \(\max _{ k \ge m_0+ m_1+1} \sigma _k(W^*) = 0\). Therefore the second inequality is equivalent to the first, and we can choose \(\bar{\alpha } = \sigma _{m_0+ m_1}(\hat{W}) \le \min _{1\le j \le m_0+ m_1} \sigma _{j}(\hat{W})\). Thus, we end up with the inequality
Applying the union bound for the two events in Lemma 8 and Lemma 9 in combination with the respective inequalities yields
with probability at least \(1 - 2e^{-c m_1m_X} - (m_0+ m_1)e^{ -\frac{\alpha }{8C_1 \left\| {A}\right\| ^4\left\| {B}\right\| ^4 m_1} m_X }\), where \(C,C_1,C_{\Delta }, c > 0\) are the constants from the lemmas above.
3 Recovery of Individual (Entangled) Neural Network Weights
The symmetric rank-1 matrices \(\{a_i \otimes a_i: i \in [m_0] \}\cup \{ v_\ell \otimes v_\ell : \ell \in [m_1] \}\) made of tensors of (entangled) neural network weights are the spanning elements of \(\mathcal{W}\), which in turn can be approximated by \(\hat{\mathcal{W}}\) as has been proved above. In this section, we explain under which conditions it is possible to stably identify approximations to the network profiles \(\{a_i: i \in [m_0] \}\cup \{ v_\ell : \ell \in [m_1] \}\) by a suitable selection process, Algorithm 3.
To simplify notation, we drop the differentiation between weights \(a_i\) and \(v_\ell \) and simply denote \(\mathcal{W}= {\text {span}}\left\{ w_1 \otimes w_1,\ldots , w_{m} \otimes w_{m}\right\} \), where \(m = m_0+ m_1\), and every \(w_\ell \) equals either one of the \(a_i\)’s or one of the \(v_\ell \)’s. Thus, m may be larger than d. We also use the notations \(W_j := w_j \otimes w_j\), and \(\hat{W}_j:= P_{\hat{\mathcal{W}}}(W_j)\). Provided that the approximation error \(\delta := \left\| {P_{\mathcal{W}} - P_{\hat{\mathcal{W}}}}\right\| _F\) satisfies \(\delta < 1\) (cf. Theorem 5), \(\{\hat{W}_j: j \in [m]\}\) is the image of a basis under a bijective map and thus can be used as a basis for \(\hat{\mathcal{W}}\) (see Lemma 32 in the Appendix). We quantify the deviation from orthonormality by \(\nu := C_F- 1\), see (9). As an example of suitable frames, normalized tight frames achieve the bounds \(c_f =C_F= m/d\) [4, Theorem 3.1], see also [11]. For instance, for such frames \(m=\lceil 1.2 d \rceil >d \) would allow for \(\nu =0.2\). These finite frames are related to the Thomson problem of spherical equidistribution, which involves finding the optimal way in which to place m points on the sphere \(\mathbb S^{d-1}\) in \(\mathbb {R}^d\) so that the points are as far away from each other as possible. We further note that if \(0<\nu < 1\) then \(\{W_j: j \in [m]\}\) is a system of linearly independent matrices, and therefore a Riesz basis (see Lemma 29 and (47) in the Appendix). We denote the corresponding lower and upper Riesz constants by \(c_r,C_R\).
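For a concrete instance of such a frame (our own minimal example), the three-vector “Mercedes-Benz” frame in \(\mathbb R^2\) is a unit-norm tight frame attaining \(c_f = C_F = m/d = 3/2\); its redundancy exceeds the regime \(\nu = 0.2\) quoted above and it serves purely to illustrate the frame and Riesz bounds.

```python
import numpy as np

theta = 2 * np.pi * np.arange(3) / 3
W = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # rows w_1, w_2, w_3 on S^1

S = W.T @ W                       # frame operator sum_j w_j w_j^T = (m/d) * Id
G = (W @ W.T) ** 2                # Gram of {w_j (x) w_j}: entries <w_i, w_j>^2
riesz = np.linalg.eigvalsh(G)     # spectrum gives the Riesz bounds c_r, C_R
```

The frame operator equals \(\tfrac{3}{2}\mathrm{Id}\), and the Gram spectrum \(\{3/4, 3/4, 3/2\}\) shows that the rank-1 matrices \(w_j \otimes w_j\) are linearly independent with Riesz constants \(c_r = 3/4\), \(C_R = 3/2\).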
Finally, for any real, symmetric matrix X, we let \(X = \sum _{j=1}^{d}\lambda _j(X)u_j(X)\otimes u_j(X)\) be the spectral decomposition ordered according to \(\Vert X\Vert =\lambda _1(X) \ge \ldots \ge \lambda _d(X)\) (in case \(\lambda _1(X)= - \Vert X\Vert \), we actually consider \(-X\) instead of X). In Theorem 11 below, we provide general recovery guarantees for network weights obtained as the eigenvector associated with the largest eigenvalue in absolute value of any suitable matrix \(M \in \hat{\mathcal{W}}\cap \mathbb S\).
Remark 10
The problem considered in this section is how to approximate the individual \(w_\ell \otimes w_\ell \) within the space \(\mathcal{W}\), or, more precisely, by using its approximation \(\hat{\mathcal{W}}\). The analysis below is completely unaware of how the space \(\hat{\mathcal{W}}\) has been constructed; in particular, it does not rely on the fact that \(\hat{\mathcal{W}}\) comes from second-order differentiation of a two hidden layer network. Hence, we are implicitly also addressing the identification of weights for one hidden layer networks (2) with a number m of neurons larger than the input dimension d, which was left as an open problem in [23].
3.1 Recovery Guarantees
The network profiles \(\{w_j, j \in [m] \}\) are (up to sign) uniquely defined by matrices \(\{W_j: j \in [m] \}\) as they are precisely the eigenvectors corresponding to the unique nonzero eigenvalue. Therefore, it suffices to recover \(\{W_j: j \in [m] \}\), and we have to study when such matrices can be uniquely characterized within the matrix space \(\mathcal{W}\) by their rank-1 property. Let us stress that this problem is strongly related to similar and very relevant ones appearing recently in the literature addressing nonconvex programs to identify sparse vectors and low-rank matrices in linear subspaces, see, e.g., in [45, 49]. In Appendix 2 (Lemma 30 and Corollary 31), we prove that unique identification is possible if any subset of \(\lceil m/2\rceil + 1\) vectors of \(\{w_j: j \in [m] \}\) is linearly independent and that such subset linear independence is actually implied by the frame bounds (9) if \(\nu =C_F-1 < \lceil \frac{m}{2}\rceil ^{-1}\). Unfortunately, this assumption seems a bit too restrictive in our scenario; hence, we instead resort to a weaker and robust version given by the following result. In particular, we prove that any near rank-1 matrix in \(\hat{\mathcal{W}}\) of unit Frobenius norm is not too far from one of the \(W_j\)’s, provided that \(\delta \) and \(\nu \) are small.
Theorem 11
Let \(M \in \hat{\mathcal{W}}\cap \mathbb {S}\) and assume \(\max \{\delta ,\nu \}\le 1/4\). If \(\lambda _1(M) > \max \{2\delta , \lambda _2(M)\}\) then
Before proving Theorem 11, we need the following technical result.
Lemma 12
For any \(M = \sum _{j=1}^{m}\sigma _j \hat{W}_j\in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(M) \ge \delta /(1-\delta )\) we have \(\max _{j}\sigma _j \ge 0\).
Proof
Assume, to the contrary, that \(\max _{j}\sigma _j < 0\), and denote \(Z = \sum _{j=1}^{m} \sigma _j W_j\) with \(M = P_{\hat{\mathcal{W}}}(Z)\). Z is negative definite, since \(v^TZv = \sum _{j=1}^{m}\sigma _j \left\langle w_j, v\right\rangle ^2\), and \(\sigma _j < 0\) for all \(j=1,\ldots ,m\). Moreover, we have \(\left\| {Z}\right\| _F \le (1-\delta )^{-1}\) by Lemma 32, and thus, we get a contradiction by
Proof of Theorem 11
Let \(\lambda _1 := \lambda _1(M)\), \(u_1 := u_1(M)\) for short in this proof. We can represent M in terms of the basis elements of \(\hat{\mathcal{W}}\) as \(M = \sum _{j=1}^{m}\sigma _j \hat{W}_j\), and let \(Z \in \mathcal{W}\) satisfy \(M = P_{\hat{\mathcal{W}}}(Z)\). Furthermore, let \(\sigma _{j^*} = \max _j \sigma _j \ge 0\) where the nonnegativity follows from Lemma 12. Using \(Z = \sum _{j=1}^{m}\sigma _j w_j\otimes w_j\) and \(\left\| {Z}\right\| _F \le (1-\delta )^{-1}\), we first notice that
and
where we used \(\left\| {\sigma }\right\| _{\infty }\le (1-\delta )^{-1}(1-\nu )^{-1} \le 2\) according to Lemma 33. Hence \(\left| {\lambda _1 - \sigma _{j^*}}\right| \le 2\delta + 2\nu \). Define now \(Q := \mathsf {Id}- u_1 \otimes u_1\). Choosing \(s \in \{-1, 1\}\) so that \(s\left\langle w_{j^*},u_1\right\rangle \ge 0\), we can bound the left hand side in (28) by
Viewing \(W_{j^*} = w_{j^*}\otimes w_{j^*}\) as the orthogonal projection onto the eigenspace of the matrix \(\lambda _1 W_{j^*}\) corresponding to eigenvalues in \([\lambda _1, \infty )\), we can use the Davis–Kahan theorem in the version of [6, Theorem 7.3.1] to further obtain
To bound the numerator, we first use \( \left\| {Z - M}\right\| _F \le \delta /(1-\delta )\) in the decomposition
and then bound the first term using \(\left| {\lambda _1 - \sigma _{j^*}}\right| \le 2\delta + 2\nu \) and the frame property (9) by
Combining these estimates with (30) and \(\delta /(1-\delta )\le 2\delta \), we obtain
The result follows since \(\{w_j \otimes w_j: j \in [m]\}\) is a Riesz basis and thus \(\left\| {\sigma }\right\| _2 \le c_r^{-1/2}\left\| {Z}\right\| _F \le 2c_r^{-1/2}\).
The preceding result provides recovery guarantees for network weights obtained as the eigenvector associated with the largest eigenvalue in absolute value of any suitable matrix \(M \in \hat{\mathcal{W}}\cap \mathbb S\). The estimate is inversely proportional to the spectral gap \(\lambda _1(M) - \lambda _2(M)\). The problem then becomes the constructive identification of matrices M belonging to \(\hat{\mathcal{W}}\cap \mathbb S\) that simultaneously maximize the spectral gap. Inspired by the results in [23], we propose the following nonconvex program as a selector of such matrices
By maximizing the spectral norm under a Frobenius norm constraint, a local maximizer of the program should be as nearly rank one as possible within a given neighborhood. Moreover, if rank one matrices exist in \(\hat{\mathcal{W}}\), these are precisely the global optimizers.
3.2 A Nonlinear Program: Properties of Local Maximizers of (31)
In this section, we prove that, except for spurious cases, local maximizers of (31) are generically almost rank-1 matrices in \(\hat{\mathcal{W}}\). In particular, we show that local maximizers either satisfy \(\left\| {M}\right\| ^2 \ge 1 - c\delta - c'\nu \), for some small constants \(c, c'\), implying near minimal rankness, or \(\left\| {M}\right\| ^2 \le c\delta + c' \nu \), i.e., all eigenvalues of M are small (the spurious cases mentioned above). Before addressing these estimates, we provide a characterization of the first- and second-order optimality conditions for (31), see [23] and also [61, 62].
Theorem 13
(Theorem 3.4 in [23]) Let \(M \in \hat{\mathcal{W}}\cap \mathbb {S}\) and assume there exists a unique \(i^* \in [d]\) satisfying \(\left| {\lambda _{i^*}(M)}\right| = \left\| {M}\right\| \). If M is a local maximizer of (31), then it fulfills the stationary or first-order optimality condition
for all \(X \in \hat{\mathcal{W}}\). A stationary point M (in the sense that M fulfills (32)) is a local maximizer of (31) if and only if for all \(X \in \hat{\mathcal{W}}\)
Proof
The statement requires a minor modification of [23, Theorem 3.4], and the proof follows along analogous lines. For the reader’s convenience, we give a self-contained proof of the statement below, with some key computations borrowed from [23].
For simplicity, we drop the argument M in \(\lambda _{i}\), \(u_i\), and without loss of generality we assume \(\lambda _{i^*} = \left\| {M}\right\| \), otherwise we consider \(-M\). Following the analysis in [23], for \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\) we can consider the function
because M is a local maximizer if and only if \(\alpha = 0\) is a local maximizer of \(f_{X}\) for all \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\).
Let us consider \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(X\perp M\) first. We note that the simplicity of \(\lambda _{i^*}\) implies that there exist analytic functions \(\lambda _{i^*}(\alpha )\) and \(u_{i^*}(\alpha )\) with \((M+\alpha X)u_{i^*}(\alpha ) = \lambda _{i^*}(\alpha ) u_{i^*}(\alpha )\) for all \(\alpha \) in a neighborhood around 0 [40, 50]. Therefore we can use a Taylor expansion \(\left\| {M+\alpha X}\right\| = \lambda _{i^*} + \lambda '_{i^*}(0)\alpha + \lambda ''_{i^*}(0)\alpha ^2/2 + \mathcal{O}(\alpha ^3)\) and combine it with \(\left\| {M+\alpha X}\right\| _F = \sqrt{1 + \alpha ^2} = 1 - \alpha ^2/2 + \mathcal{O}(\alpha ^4)\) to get
Differentiating once, we get \(f_X'(0) = \lambda '_{i^*}(0)\); hence, \(\alpha = 0\) is a stationary point if and only if \(\lambda '_{i^*}(0)\) vanishes. Following the computations in [23], we find that \(\lambda '_{i^*}(0) = u_{i^*}(0)^T X u_{i^*}(0) = 0\), and thus, (32) follows for any \(X \perp M\). For general X, we split \(X = \left\langle X, M\right\rangle M + X_{\perp }\), and get \(u_{i^*}(0)^T X u_{i^*}(0) = \left\langle X, M\right\rangle u_{i^*}(0)^T M u_{i^*}(0) = \lambda _{i^*}(0)\left\langle X, M\right\rangle \).
For (33), we have to check additionally \(f''_X(\alpha ) \le 0\). The second derivative of \(f_X(\alpha )\) at zero is given by \(f_X''(0) = \lambda ''_{i^*}(0) - \lambda _{i^*}(0)\); hence, the condition for attaining a local maximum is \(\lambda ''_{i^*}(0) \le \lambda _{i^*}(0)\). Again, we can follow the computations in [23] to obtain
and (33) follows immediately for any \(X \perp M\), \(\left\| {X}\right\| _F = 1\). For general X, we decompose it into \(X = \left\langle X, M\right\rangle M + X_{\perp }\). Since \(u_{i^*}^T(0) M u_{k}(0) = 0\) for all \(k\ne i^*\), we get
and the result follows from \(\left\| {X_{\perp }}\right\| _F = \left\| {X - \left\langle X, M\right\rangle M}\right\| _F\).
For simplicity, we denote \(u_i := u_i(M)\) and \(\lambda _i = \lambda _i(M)\) throughout the rest of this section. Moreover, we assume M satisfies
-
(A1)
\(\lambda _1 = \left\| {M}\right\| \) (this is without loss of generality, because M and \(-M\) are simultaneously local maximizers),
-
(A2)
\(\lambda _1 > \lambda _2\). (This is a useful technical condition in order to use the second-order optimality condition (33).)
To derive the bounds for \(\lambda _1\), we establish an inequality \(0\le \lambda _1^2(\lambda _1^2 - 1) + c\delta + c'\nu \), which implies that \(\lambda _1^2(M)\) is either close to 0 or close to 1. A first ingredient for obtaining the inequality is
where we used \(\left| {\Vert \hat{W}_ju_1\Vert ^2 - u_1^T \hat{W}_j u_1}\right| \le 2\delta \) in the inequality, see Lemma 33 in Appendix 2, and (32) in the equality. The other useful technical estimate is provided in the following Lemma, which is proven by leveraging the second order optimality condition (33).
Lemma 14
Assume that M is a local maximizer satisfying (A1) and (A2) and let \(\max \{\delta , \nu \} < 1/4\). For any \(X \in \hat{\mathcal{W}}\) with \(\left\| {X}\right\| _F \le 1\) we have
For the proof of Lemma 14, we need a lower bound for the smallest eigenvalue (see Appendix 2 for the proof of Lemma 15).
Lemma 15
Assume that M is a stationary point of (31) satisfying (A1) and (A2). If \(\max \{\delta , \nu \} < 1/4\), then \(\lambda _D \ge -2\delta \lambda _1^{-1} - 8\delta - 4\nu \).
Proof of Lemma 14
We first use (32) and (33) to get
and then rearrange the inequality to obtain
Using the lower bound for \(\lambda _D\) from Lemma 15, and \(\lambda _1\le 1\), we get
By combining (34) and (35), the bounds for \(\lambda _1\) follow.
Theorem 16
Assume that M is a local maximizer of (31) satisfying (A1) and (A2), and assume \(38\delta + 13\nu < 1/4\). Then we have \(\lambda _1^2 \ge 1- 38\delta - 13\nu \) or \(\lambda _1^2 \le 38\delta + 13\nu .\)
Proof
Let \(j^* = \arg \max _j \sigma _j\). We first note that we can assume \(\sigma _{j^*} \ge 0\) without loss of generality by Lemma 12, since there is nothing to show if \(\lambda _1\le 2\delta \). Now we consider (34) and (35) for \(X = \hat{W}_{j^*}\) to get the inequality
We separate two cases. In the first case, we have \(\sigma _{j^*} > 1\), which implies \(\langle \hat{W}_{j^*}, M\rangle > 1 - 5\delta - 2\nu \) and thus \(\langle W_{j^*}, M\rangle > 1 - 6\delta - 2\nu \) by Lemma 33 and \(\max \{\delta ,\nu \} < 1/4\). Since \(\langle W_{j^*}, M\rangle = w_{j^*}^T M w_{j^*}\), this implies \(\lambda _1 > 1 - 6\delta - 2\nu \), i.e., the result is proven. We continue with the case \(\sigma _{j^*} \le 1\), which implies \(\lambda _1 \sigma _{j^*} \Vert \hat{W}_{j^*}\Vert _F^2 \le 1\). Using Lemma 33 to bound \(\sigma _{j^*} \Vert \hat{W}_{j^*}\Vert _F^2 - \langle \hat{W}_{j^*}, M\rangle \), \(\lambda _1 < 1\) and \(\Vert \hat{W}_{j^*}\Vert _F^2 \ge 1 - 2\delta \), the last inequality in (36) implies
Furthermore, by following the computation we performed for (29), we get \(\sigma _{j^*} \ge \lambda _1 - \nu - 2\delta \), and inserting it in (37) we obtain
Provided that \(38\delta + 13\nu < 1/4\), this quadratic inequality (in the unknown \(\lambda _1^2\)) has solutions \(\lambda _1^2 \ge 1 - 38\delta - 13\nu \), or \(\lambda _1^2 \le 38\delta + 13\nu \).
In Sect. 3.2, we analyze local maximizers of (31) and show that there exist small constants \(c,c'\) such that either \(\left\| {M}\right\| ^2 \ge 1 - c\delta - c'\nu \), or \(\left\| {M}\right\| ^2 \le c\delta + c'\nu \). Therefore, a local maximizer of (31) is either almost rank-1 or it has its energy distributed across many eigenvalues. This criterion can be easily checked in practice, and therefore maximizing (31) is a suitable approach for finding near rank-1 matrices in \(\hat{\mathcal{W}}\). In this section, we show how those individual symmetric rank-1 tensors can be approximated by a simple iterative algorithm, Algorithm 3, making exclusive use of the projection \(P_{\hat{\mathcal{W}}}\). Algorithm 3 strives to solve the nonconvex program (31) by iteratively increasing the spectral norm of its iterates. Our approach is closely related to the projected gradient ascent iteration [23, Algorithm 4.1], but we introduce some modifications; in particular, we exchange the order of the normalization and the projection onto \(\hat{\mathcal{W}}\). The proof of convergence of [23, Algorithm 4.1] takes advantage of that different ordering of these operations to address the case where \(\mathcal{W}\) is spanned by at most \(m \le d\) rank-1 matrices formed as tensors of nearly orthonormal vectors (after whitening). In fact, its analysis is heavily based on approximated singular value or spectral decompositions. Unfortunately, in our case the decomposition \({M=\sum _{j=1}^m \sigma _j w_j \otimes w_j}\) does not approximate the singular value or spectral decomposition, since the \(w_j\)’s are redundant (they form a frame) and therefore are not properly nearly orthonormal in the sense required in [23].
Algorithm 3 is based on the iterative application of the operator \(F_{\gamma }\) defined by
with \(\gamma >0\) and \(P_{\mathbb {S}}\) as the projection onto the sphere \(\mathbb {S}= \{X : \left\| {X}\right\| _F = 1\}\). The following Lemma shows that, if \(\lambda _1(X) > 0\), the operator \(F_{\gamma }\) is well-defined, in the sense that it is a single-valued operator.
![figure c](http://media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs00365-021-09550-5/MediaObjects/365_2021_9550_Figc_HTML.png)
Lemma 17
Let \(X\in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(X) > 0\) and \(\gamma >0\). Then \(\Vert P_{\hat{\mathcal{W}}}(X + \gamma u_1(X) \otimes u_1(X))\Vert _F^2 = 1 + 2\gamma \lambda _1(X)+ \gamma ^2 \left\| {P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))}\right\| _F^2\). In particular, \(F_{\gamma }(X)\) is well-defined and can be explicitly expressed as
Proof
The result follows from \(\left\langle X, P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))\right\rangle = \lambda _1(X)\) and computing explicitly the squared norm \(\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u_1(X) \otimes u_1(X))}\right\| _F^2\).
We analyze next the sequence \((M_j)_{j \in \mathbb {N}}\) generated by Algorithm 3. We show that \((\lambda _1(M_j))_{j \in \mathbb {N}}\) is a strictly monotone increasing sequence, converging to a well-defined limit \(\lambda _{\infty }=\lim _{j\rightarrow \infty }\lambda _1(M_j)\), and, if \(\lambda _1(M_j) > 1/\sqrt{2}\) for some j, all convergent subsequences of \((M_j)_{j \in \mathbb {N}}\) converge to fixed points of \(F_{\gamma }\). Moreover, we prove that such fixed points satisfy (32) and are thus stationary points of (31). We begin by providing two equivalent characterizations of (32).
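The monotonicity just described can be observed directly. The sketch below (our own construction) runs the iteration \(M_{j+1} = F_\gamma (M_j)\) with the exact projection onto \(\mathcal W\) (i.e., \(\delta = 0\)) for a synthetic frame of unit profiles, starting near one rank-1 generator; the dimensions, \(\gamma \), and the starting point are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, gamma = 6, 7, 2.0

Wp = rng.standard_normal((d, m))
Wp /= np.linalg.norm(Wp, axis=0)                  # unit profiles w_1, ..., w_m
basis = np.column_stack([np.outer(w, w).ravel() for w in Wp.T])
Q = np.linalg.qr(basis)[0]                        # orthonormal basis of W

def proj_W(X):                                    # exact projection P_W
    return (Q @ (Q.T @ X.ravel())).reshape(d, d)

def F(M):                                         # F_gamma(M) = P_S(P_W(M + gamma u1 u1^T))
    u = np.linalg.eigh(M)[1][:, -1]               # u_1(M), top eigenvector
    Y = proj_W(M + gamma * np.outer(u, u))
    return Y / np.linalg.norm(Y)

w1 = Wp[:, 0]                                     # targeted profile
M = proj_W(np.outer(w1, w1) + 0.05 * rng.standard_normal((d, d)))
M = (M + M.T) / 2
M /= np.linalg.norm(M)

lams = [np.linalg.eigvalsh(M)[-1]]
for _ in range(300):
    M = F(M)
    lams.append(np.linalg.eigvalsh(M)[-1])
```

In this setting \(\lambda _1(M_j)\) increases monotonically and, starting this close to \(w_1 \otimes w_1\), should approach 1, i.e., the iterate becomes nearly rank-1 on the unit Frobenius sphere.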
Lemma 18
For \(M \in \hat{\mathcal{W}}\) and \(c \ne 0\), we have
Proof
Assume that \(v^T X v = c \left\langle X,M\right\rangle \) for all X. We notice that the assumption is equivalent to \(\left\langle X, v\otimes v - c M\right\rangle = 0\) for all \(X\in \hat{\mathcal{W}}\). Therefore \(P_{\hat{\mathcal{W}}}(v\otimes v - c M) = 0\), and the result follows from \(M \in \hat{\mathcal{W}}\). In the case where \(M = c^{-1} P_{\hat{\mathcal{W}}}(v \otimes v)\), we compute \(c\left\langle X, M\right\rangle = \left\langle X, P_{\hat{\mathcal{W}}}(v \otimes v)\right\rangle = v^T X v\) since \(X\in \hat{\mathcal{W}}\).
Lemma 19
Let \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\). We have \(\left\| {P_{\hat{\mathcal{W}}}(u_j(X)\otimes u_j(X))}\right\| _F \ge \left| {\lambda _j(X)}\right| \) with equality if and only if \(X = \lambda _j(X)^{-1}P_{\hat{\mathcal{W}}}(u_j(X)\otimes u_j(X))\).
Proof
We drop the argument X for \(\lambda _j(X)\) and \(u_j(X)\) for simplicity. We first calculate
Moreover, we have equality if and only if \(\left\| {P_{\hat{\mathcal{W}}}(u_j \otimes u_j)}\right\| _F = \left| {\lambda _j}\right| \), hence (38) is actually a chain of equalities. Specifically,
which implies \(X = c P_{\hat{\mathcal{W}}}(u_j \otimes u_j)\) for some scalar c. Since \(\left\| {X}\right\| _F = 1\), \(c= \lambda _j^{-1}\) follows from
Lemmas 18 and 19 show that the stationary point condition (32) for M with \(\left\| {M}\right\| = \left| {\lambda _{i^*}(M)}\right| \) and isolated \(\lambda _{i^*}\) is equivalent to both
A similar condition appears naturally if we characterize the fixed points of \(F_{\gamma }\).
Lemma 20
Let \(\gamma > 0\) and \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(X) > 0\). Then we have
Proof
For simplicity, we write \(u:=u_1(X)\) and \(\lambda := \lambda _1(X)\) in this proof. We first prove that \(0< \lambda < \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\) implies \( \lambda _1(F_{\gamma }(X)) > \lambda \). It suffices to exhibit a single unit vector v such that \(v^T F_{\gamma }(X) v> \lambda \). In particular, we can test \(F_{\gamma }(X)\) with \(v=u\), which yields the identity
By using now \(\lambda < \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\), we can bound
Inserting this inequality into the previous identity, we obtain the desired result by
We now show that \(F_{\gamma }(X) = X\) implies \(\lambda = \left\| {P_{\hat{\mathcal{W}}}(u_1(X) \otimes u_1(X))}\right\| _F\). Indeed, \(F_{\gamma }(X) = X\) implies \(\lambda _1(F_{\gamma }(X)) = \lambda \), and thus \(\lambda \ge \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) according to (39). Since \(\lambda \le \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) always holds by Lemma 19, equality follows.
We address now the converse, i.e., \(\lambda = \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\) implies \(F_{\gamma }(X) = X\), and we note that \(\lambda = \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) implies \(X = \lambda ^{-1}P_{\hat{\mathcal{W}}}(u \otimes u)\) by Lemma 19. Using this and the definition of \(F_{\gamma }(X)\), we get
To conclude the proof it remains to show that \(\lambda _1(F_{\gamma }(X)) > \lambda \) implies \(0< \lambda < \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\). Since \(\lambda \le \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) by Lemma 19, and \(\lambda _1(F_{\gamma }(X)) > \lambda \) implies \(F_{\gamma }(X) \ne X\) and hence \(\lambda \ne \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\), necessarily \(\lambda <\Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\); positivity holds by assumption.
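For illustration, the iteration \(M_{j+1} = F_\gamma(M_j) = P_{\mathbb S}(M_j + \gamma P_{\hat{\mathcal{W}}}(u_1(M_j)\otimes u_1(M_j)))\) of Algorithm 3 and the monotone increase of \(\lambda_1\) just established can be reproduced in a few lines of numpy (a sketch on a randomly generated toy subspace, not the exact experimental setup of Sect. 4):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, gamma = 8, 5, 2.0

# toy subspace W = span{w_j w_j^T} for randomly drawn unit vectors w_j
ws = rng.standard_normal((m, d))
ws /= np.linalg.norm(ws, axis=1, keepdims=True)
basis = []
for w in ws:
    W = np.outer(w, w)
    for Bmat in basis:                    # Gram-Schmidt orthogonalization
        W = W - np.sum(W * Bmat) * Bmat
    basis.append(W / np.linalg.norm(W))

def proj(X):
    return sum(np.sum(X * Bmat) * Bmat for Bmat in basis)

def F(M):
    """One step M -> P_S(M + gamma * P_W(u1(M) (x) u1(M)))."""
    u1 = np.linalg.eigh(M)[1][:, -1]
    Y = M + gamma * proj(np.outer(u1, u1))
    return Y / np.linalg.norm(Y)

# random start in W intersected with the unit sphere, positive top eigenvalue
M = proj(rng.standard_normal((d, d)))
M /= np.linalg.norm(M)
if np.linalg.eigh(M)[0][-1] < 0:
    M = -M

lams = [np.linalg.eigh(M)[0][-1]]
for _ in range(200):
    M = F(M)
    lams.append(np.linalg.eigh(M)[0][-1])

# lambda_1 never decreases along the iteration; empirically it typically
# approaches 1, i.e., M approaches a rank-1 matrix in W
assert all(b >= a - 1e-9 for a, b in zip(lams, lams[1:]))
```

The assertion checks exactly the dichotomy of the lemma: either the top eigenvalue strictly increases, or the iterate is (numerically) a fixed point.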
The preceding lemma implies the convergence of \((\lambda _1(M_j))_{j \in \mathbb {N}}\) by monotonicity and boundedness. Moreover, we can use this convergence to establish \(\Vert M_{j+1}-M_{j} \Vert _F \rightarrow 0\).
Lemma 21
Let \(\gamma > 0\), \(M_0 \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(M_0) > 0\), and let \(M_j := F_{\gamma }(M_{j-1})\). The sequence \((\lambda _1(M_j))_{j \in \mathbb {N}}\) converges to a well-defined limit \(\lambda _{\infty }\), and \(\lim _{j\rightarrow \infty } \Vert M_{j+1}-M_{j} \Vert _F= 0\).
Proof
Denote \(U_j := P_{\hat{\mathcal{W}}}(u_1(M_j) \otimes u_1(M_j))\) and \(\lambda _j := \lambda _1(M_j)\) for simplicity. The sequence \((\lambda _j)_{j\in \mathbb {N}}\) is monotone and contained in the bounded interval [0, 1] by Lemma 20, and therefore converges to a limit \(\lambda _{\infty }\). To prove \(\Vert M_{j+1}-M_{j} \Vert _F\rightarrow 0\), we exploit \((\lambda _{j+1} - \lambda _{j})\rightarrow 0\). We first have \((\left\| {U_j}\right\| _F - \lambda _j) \rightarrow 0\) since (41) yields
and \(\left\| {U_j}\right\| _F \ge \lambda _j \ge \lambda _0\) for all j. Define the shorthand \(\Delta _j := \left\| {U_j}\right\| _F - \lambda _j\). We now show that \(\left\| {M_{j+1} - M_j}\right\| _F \le C\sqrt{\Delta _j}\) for some constant C. First notice that
Therefore there exists a matrix \(E_j\) with \(M_j = \lambda _j^{-1}U_j + E_j\) and \(\left\| {E_j}\right\| \le \lambda _0^{-1}\sqrt{2\Delta _j}\). Furthermore, by the triangle inequality we have
hence it remains to bound the first term. Using \(M_j = \lambda _j^{-1}U_j + E_j\) and \(M_{j+1} = \left\| {M_j + \gamma U_j}\right\| _F^{-1} (M_j + \gamma U_j)\), we have \(\left\| {M_j + \gamma U_j}\right\| _FM_{j+1} = (\lambda _j^{-1} + \gamma )U_j + E_j\) and thus
Since \(\left\| {M_j + \gamma U_j}\right\| _F \ge 1\) according to Lemma 17, \(\left\| {M_{j+1} - M_j}\right\| \rightarrow 0\) follows.
It remains to show that convergent subsequences of \((M_j)_{j \in \mathbb {N}}\) converge to fixed points of \(F_{\gamma }\). Then, by (40) and Lemmas 18 and 19, fixed points satisfy the first-order optimality condition (32) and are stationary points of (31). To prove convergence of subsequences to fixed points, we require continuity of \(F_{\gamma }\). The following lemma shows that \(F_{\gamma }\) is continuous at matrices X satisfying \(\lambda _1(X) > 1/\sqrt{2}\), i.e., where the largest eigenvalue is isolated and \(u_1(X)\) is a continuous function of X.
Lemma 22
Let \(\gamma > 0\), \(\epsilon > 0\) arbitrary, and define \(\mathcal{M}_{\epsilon }:= \{M \in \hat{\mathcal{W}}\cap \mathbb {S}: \lambda _1(M) \ge (\frac{1}{2} + \epsilon )^{1/2}\}\). Then \(F_{\gamma }(X) \in \mathcal{M}_{\epsilon }\) for all \(X \in \mathcal{M}_{\epsilon }\), and \(F_{\gamma }\) is \(\left\| {\cdot }\right\| _F\)-Lipschitz continuous on \(\mathcal{M}_{\epsilon }\), with Lipschitz constant \((1 + \gamma /\epsilon )\).
Proof
\(F_{\gamma }(X) \in \mathcal{M}_{\epsilon }\) follows directly from Lemma 20, i.e., from the fact that applying \(F_{\gamma }\) can only increase the largest eigenvalue. For the Lipschitz continuity, consider \(X,Y \in \mathcal{M}_{\epsilon }\). We first note that, by using [6, Theorem 7.3.1] and \(\lambda _i(Y) \le \sqrt{1/2 - \epsilon }\) for \(i=2,\ldots ,m_0\), we get
Furthermore, we have \(\left\| {X + \gamma P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))}\right\| _F^2 \ge 1\) according to Lemma 17, and therefore, \(P_{\mathbb {S}}\) acts on \(X + \gamma P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))\) and \(Y + \gamma P_{\hat{\mathcal{W}}}(u_1(Y)\otimes u_1(Y))\) as a projection onto the convex set \(\{X : \left\| {X}\right\| _F \le 1\}\). Therefore it acts as a contraction and the result follows from
The convergence to fixed points of any subsequence of \((M_j)_{j \in \mathbb {N}}\) now follows as a corollary of Lemma 34 in the Appendix.
Theorem 23
Let \(\epsilon > 0\), \(\gamma > 0\), \(M_0 \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(M_0) \ge 1/\sqrt{2} + \epsilon \), and let \(M_{j+1} := F_{\gamma }(M_{j})\) be generated by Algorithm 3. Then \((M_{j})_{j \in \mathbb {N}}\) has a convergent subsequence, and any such subsequence converges to a fixed point of \(F_{\gamma }\) and, consequently, to a stationary point of (31).
Proof
By Lemma 22, the operator \(F_{\gamma }\) is continuous on \(\mathcal{M}_{\epsilon }:= \{M \in \hat{\mathcal{W}}\cap \mathbb {S}: \lambda _1(M) \ge (\frac{1}{2} + \epsilon )^{1/2}\}\) for any \(\epsilon > 0\). Moreover, by Lemma 20 we have \((M_{j})_{j \in \mathbb {N}} \subset \mathcal{M}_{\epsilon }\), and by Lemma 21 we have \(\left\| {M_{j+1} - M_j}\right\| _F \rightarrow 0\). Therefore we can apply Lemma 34 to see that any convergent subsequence converges to a fixed point of \(F_{\gamma }\). Moreover, since \((M_{j})_{j \in \mathbb {N}}\) is bounded, there exists at least one convergent subsequence by Bolzano–Weierstrass. Finally, any fixed point \(\bar{M}\) of \(F_\gamma \) can be written as \(\bar{M} = \lambda _1(\bar{M})^{-1}P_{\hat{\mathcal{W}}}(u_1(\bar{M})\otimes u_1(\bar{M}))\) by Lemmas 19 and 20. Since \(\lambda _1(\bar{M}) > 1/\sqrt{2}\), it is an isolated eigenvalue satisfying \(\lambda _1(\bar{M}) = \left\| {\bar{M}}\right\| \), and thus \(\bar{M}\) satisfies the first-order optimality condition (32) of (31) by Theorem 13.
Remark 24
The convergence analysis of Algorithm 3 provided above does not use the structure of the space \(\mathcal{W}\); it focuses exclusively on the behavior of the largest eigenvalue \(\lambda _1\). As a consequence, it guarantees that the iterates have monotonically increasing spectral norm and that they generically converge to stationary points of (31). However, it does not ensure convergence to nonspurious, minimal-rank local minimizers of (31). In the numerical experiments of Sect. 4, where \(\{w_j:j\in [m]\}\) are sampled randomly from certain distributions, an overwhelming majority of sequences \((M_{j})_{j\in \mathbb {N}}\) converges to a near rank-1 matrix with an eigenvalue close to one, whose corresponding eigenvector approximates a network profile with good accuracy. To explain this success, we would need a finer, quantitative analysis of the increase of the spectral norm during the iterations, for instance by quantifying the gap
by means of a suitable constant \(0<\Theta <1\). As clarified in the proof of Lemma 19, the smaller the constant \(\Theta \), the larger the increase of the spectral norm \(\Vert M_{j+1} \Vert > \Vert M_j\Vert \) between consecutive iterations of Algorithm 3. The following result is an attempt to obtain a quantitative estimate of \(\Theta \) by injecting more information about the structure of the space \(\mathcal{W}\).
In order to simplify the analysis, let us assume \(\delta =0\) or \(\hat{\mathcal{W}}= \mathcal{W}\).
Proposition 25
Assume that \(\{W_\ell :=w_\ell \otimes w_\ell : \ell \in [m] \}\) forms a frame for \(\mathcal{W}\), i.e., there exist constants \(c_\mathcal{W},C_\mathcal{W}>0\) such that for all \(X \in \mathcal{W}\)
Denote by \(\{ \tilde{W}_\ell : \ell \in [m] \}\) the canonical dual frame, so that
for any symmetric matrix X. Then, for \(X \in \mathcal{W}\) and the notation \(\lambda _j:=\lambda _j(X)\), \(\lambda _1 =\Vert X\Vert \) and \(u_j:=u_j(X)\), we have
Proof
Let us fix \(X \in \mathcal{W}\). Then we have two ways of representing X, its frame decomposition and its spectral decomposition:
By using both decompositions and again the notation \(W_\ell = w_\ell \otimes w_\ell \), we obtain
By observing that \(\sum _{\ell =1}^m\langle u_j \otimes u_j, \tilde{W}_\ell \rangle _F^2 \le c_\mathcal{W}^{-1} \Vert P_{\mathcal{W}}(u_j \otimes u_j)\Vert _F^2\) (the canonical dual frame upper bound), and using the Cauchy–Schwarz inequality, we can further estimate
where in the last inequality we applied the estimates
The meaning of estimate (42) is explained by the following mechanism: whenever the deviation of an iterate \(M_j\) of Algorithm 3 from a rank-1 matrix in \(\mathcal{W}\) is large, in the sense that \(\Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F\) is small, the constant \(\Theta = \left( \frac{C_\mathcal{W}}{c_\mathcal{W}}\right) ^{1/2} \left( \sum _{\lambda _j >0} \lambda _j \Vert P_{\mathcal{W}} (u_j \otimes u_j) \Vert _F \right) \) is also small, and the iteration \(M_{j+1} = F_\gamma (M_j)\) efficiently increases the spectral norm. The gain diminishes as the iterate \(M_j\) gets closer and closer to a rank-1 matrix. It might be possible to obtain an even more precise analysis of the behavior of Algorithm 3 by simultaneously considering the dynamics of (the gaps between) several eigenvalues, not only \(\lambda _1\). We have, however, not yet found a conclusive argument.
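The frame computations underlying Proposition 25 can be made concrete: flattening the matrices \(W_\ell = w_\ell \otimes w_\ell\) to vectors, the canonical dual frame is obtained from the pseudo-inverse of the frame operator, and it reconstructs every \(X \in \mathcal{W}\) exactly (a numpy sketch with generic random weights; dimensions are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 5, 9

ws = rng.standard_normal((m, d))
Ws = np.array([np.outer(w, w).ravel() for w in ws])   # frame {W_l}, flattened

S = Ws.T @ Ws                      # frame operator S(X) = sum_l <X, W_l> W_l
duals = Ws @ np.linalg.pinv(S)     # canonical dual frame {W~_l = S^+ W_l}

# every X in W = span{W_l} is reconstructed as X = sum_l <X, W~_l> W_l
X = rng.standard_normal(m) @ Ws
coeffs = duals @ X                 # coefficients <X, W~_l>
X_rec = coeffs @ Ws
assert np.allclose(X, X_rec)
```

The pseudo-inverse is used because \(S\) is invertible only on \(\mathcal{W}\) itself, not on the full space of symmetric matrices.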
4 Numerical Experiments About the Recovery of Network Profiles
In this section, we present numerical experiments on the recovery of network weights \(\{a_i : i \in [m_0]\}\) and \(\{v_\ell : \ell \in [m_1]\}\) from a few point queries of the network. The recovery procedure leverages the theoretical insights provided in the previous sections. Without much loss of generality, we neglect the active subspace reduction and focus on the case \(d = m_0\). We construct an approximation \(P_{\hat{\mathcal{W}}} \approx P_{\mathcal{W}}\) using Algorithm 2. Then we randomly generate a number of matrices \(\{M^k_0: k \in [K]\} \subset \hat{\mathcal{W}}\cap \mathbb {S}\) and compute the sequences \(M^k_{j+1} = F_{\gamma }(M_j^k)\) as in Algorithm 3. For each limiting matrix \(M_{\infty }^{k}\), \(k \in [K]\), we compute the largest eigenvector \(u_1(M_{\infty }^{k})\), and then cluster \(\{u_1(M_{\infty }^{k}): k \in [K]\}\) into \(m=m_0+ m_1\) classes using kMeans++. After projecting the resulting cluster centers onto \(\mathbb {S}^{d-1}\), we obtain vectors \(\{\hat{w}_j: j \in [m_0+ m_1]\}\) that serve as approximations to \(\{a_i: i\in [m_0]\}\) and \(\{v_\ell : \ell \in [m_1]\}\).
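The clustering step can be sketched as follows. Since eigenvectors are defined only up to sign, the vectors are sign-aligned first; note that `cluster_directions` below is our own simplified stand-in (farthest-point seeding plus Lloyd iterations) for the kMeans++ routine used in the experiments:

```python
import numpy as np

def cluster_directions(vecs, m, iters=50):
    """Cluster unit vectors into m classes, identifying v with -v
    (hypothetical helper, not part of the paper's pipeline)."""
    # sign-align: flip each vector so its largest-magnitude entry is positive
    V = np.array([v * np.sign(v[np.argmax(np.abs(v))]) for v in vecs])
    # farthest-point seeding, in the spirit of kMeans++
    centers = [V[0]]
    for _ in range(m - 1):
        sims = np.max(np.stack([V @ c for c in centers]), axis=0)
        centers.append(V[np.argmin(sims)])
    centers = np.array(centers)
    for _ in range(iters):                       # Lloyd iterations
        labels = np.argmax(V @ centers.T, axis=1)
        for k in range(m):
            if np.any(labels == k):
                centers[k] = V[labels == k].mean(axis=0)
        centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    return centers

# toy check: noisy copies, with random signs, of three hidden unit directions
rng = np.random.default_rng(3)
true = rng.standard_normal((3, 8))
true /= np.linalg.norm(true, axis=1, keepdims=True)
samples = []
for t in true:
    for _ in range(20):
        s = rng.choice([-1, 1]) * t + 0.01 * rng.standard_normal(8)
        samples.append(s / np.linalg.norm(s))
centers = cluster_directions(samples, 3)
err = [min(min(np.sum((t - c)**2), np.sum((t + c)**2)) for c in centers)
       for t in true]
assert max(err) < 1e-2
```

In the actual experiments the sklearn kMeans++ implementation is used instead; the sketch only illustrates why the sign ambiguity of the eigenvectors has to be resolved before clustering.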
We perform experiments for different scenarios, where either the activation function or the construction of the network weights varies. Guided by our theoretical results, we pay particular attention to how the network architecture, e.g., \(m_0\) and \(m_1\), influences the simulation results. The entire procedure is rather flexible and can be adjusted in different ways, e.g., changing the distribution \(\mu _X\). To provide a fair account of the success, we fix hyperparameters of the approach throughout all experiments. Test scenarios, hyperparameters, and error measures are reported below in more detail. Afterwards, we present and discuss the results.
Scenarios and construction of the networks The network is constructed by choosing activation functions and network weights \(\{a_i : i \in [m_0]\}\), \(\{b_\ell : \ell \in [m_1]\}\), for which \(v_\ell \) is then defined via \(v_\ell = \frac{AG_0 b_\ell }{\left\| {AG_0 b_\ell }\right\| _2}\), see Definition 3. To construct activation functions, we set \(g_i(t) = \phi (t + \theta _i)\) for \(i \in [m_0]\), and \(h_\ell (t) = \phi (t + \tau _\ell )\) for \(\ell \in [m_1]\). We choose either \(\phi (t) = \tanh (t)\) or \(\phi (t) = \frac{1}{1+e^{-t}} - \frac{1}{2}\) (shifted sigmoid function), and sample offsets (also called biases) \(\theta _i\), \(\tau _\ell \) independently at random from \(\mathcal{N}(0,0.01)\).
As made clear by our theory, see Theorem 16, a sufficient condition for successful recovery of the entangled weights is that \(\nu =C_F-1\) is small, where \(C_F\) is the upper frame constant of the entangled weights as in Definition 3. In the following numerical experiments, we wish to verify how crucial this requirement is. Thus, we test two different scenarios for the weights. The first scenario, which is designed to best fulfill the sufficient condition \(\nu \approx 0\), models both \(\{a_i : i \in [m_0]\}\) and \(\{b_\ell : \ell \in [m_1]\}\) as perturbed orthogonal systems. For their construction, we first sample orthogonal bases uniformly at random and then apply a random perturbation. The perturbation is such that \((\sum _{i=1}^{m_0}(\sigma _i(A) - 1)^2)^{1/2} \approx (\sum _{i=1}^{m_1}(\sigma _i(B) - 1)^2)^{1/2} \approx 0.3\), where \(\sigma _i(A)\) and \(\sigma _i(B)\) denote the singular values of A and B. In the second scenario, we sample the (entangled) weights independently from \({\text {Uni}}({\mathbb {S}^{m_0-1}})\). In this situation, if the dimensionality \(d=m_0\) is relatively small, the system will likely not fulfill the condition \(\nu \approx 0\) well; however, as the dimension \(d=m_0\) is chosen larger, the weights tend to be more incoherent and gradually approach the previous scenario.
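One possible construction of such perturbed orthogonal weights (the paper does not spell out its exact perturbation; here, as an assumed variant, a Gaussian noise scale is bisected until the singular values deviate from 1 by about 0.3 in \(\ell_2\)):

```python
import numpy as np

def perturbed_orthogonal(d, m, dev=0.3, seed=0):
    """Random orthonormal columns plus Gaussian noise, rescaled so that the
    singular values sigma_i satisfy (sum_i (sigma_i - 1)^2)^(1/2) ~ dev.
    (Illustrative construction, not necessarily the paper's exact one.)"""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((d, m)))   # d >= m assumed
    E = rng.standard_normal((d, m))
    lo, hi = 0.0, 1.0                                  # bisect the noise scale
    for _ in range(60):
        t = 0.5 * (lo + hi)
        s = np.linalg.svd(Q + t * E, compute_uv=False)
        if np.linalg.norm(s - 1.0) < dev:
            lo = t
        else:
            hi = t
    return Q + lo * E

A = perturbed_orthogonal(30, 30, dev=0.3)
s = np.linalg.svd(A, compute_uv=False)
assert abs(np.linalg.norm(s - 1.0) - 0.3) < 0.02
```

The bisection works because the singular-value deviation vanishes at noise scale 0 and exceeds the target at scale 1 for the sizes used here.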
Hyperparameters Unless stated otherwise, we sample \(m_X = 1000\) Hessian locations from \(\mu _X = \sqrt{m_0}\text {Uni}\left( \mathbb {S}^{m_0-1}\right) \), and use \(\epsilon = 10^{-5}\) in the finite difference approximation (17). We generate 1000 random matrices \(\{M_0^k: k \in [1000]\}\) by sampling \(m^k \sim \mathcal{N}(0,\mathsf {Id}_{m_0+ m_1})\) and defining \(M_0^k := \mathbb {P}_{\mathbb {S}}(\sum _{i=1}^{m_0+ m_1}m_i^k u_i)\), where the \(u_i\)'s are as in Algorithm 2. The constant \(\gamma = 2\) is used in the definition of \(F_{\gamma }\), and the iteration is stopped if \(\lambda _1(M_{j+1}^k) - \lambda _1(M_{j}^k) < 10^{-5}\), or after 200 iterations. kMeans++ is run with default settings using sklearn. All reported results are averages over 30 repetitions.
Error measures Three error measures are reported:
-
the normalized projection error \(\frac{\Vert P_{\hat{\mathcal{W}}} - P_{\mathcal{W}}\Vert _F^2}{m_0+ m_1}\),
-
a false positive rate \(\text {FP}(T)=\frac{\#\{j : E(\hat{w}_j) > T\}}{m_0+ m_1}\), where \(T>0\) is a threshold and E(u) is defined by
$$\begin{aligned} E(u) := \min _{w \in \{\pm a_i,\pm v_\ell : i \in [m_0],\ell \in [m_1]\}}\left\| {u - w}\right\| ^2_2, \end{aligned}$$
-
recovery rates \(\text {R}_a(T) = \frac{\#\{i : \mathcal{E}(a_i) < T\}}{m_0}\) and \(\text {R}_v(T) = \frac{\#\{\ell : \mathcal{E}(v_\ell ) < T\}}{m_1}\), where
$$\begin{aligned} \mathcal{E}(u) := \min _{w \in \{\pm \hat{w}_j : j \in [m_0+m_1] \}}\left\| {u - w}\right\| ^2_2. \end{aligned}$$
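In code, these error measures read as follows (a direct transcription; `W_true` collects the true weights \(a_i, v_\ell\) and `W_hat` the recovered \(\hat w_j\)):

```python
import numpy as np

def E(u, W_set):
    """Squared distance of u to the closest vector in W_set, up to sign."""
    return min(min(np.sum((u - w)**2), np.sum((u + w)**2)) for w in W_set)

def fp_rate(W_hat, W_true, T):
    """Fraction of recovered vectors matching no true weight within T."""
    return sum(E(u, W_true) > T for u in W_hat) / len(W_hat)

def recovery_rate(W_true, W_hat, T):
    """Fraction of true weights approximated within T by a recovered vector."""
    return sum(E(w, W_hat) < T for w in W_true) / len(W_true)

# toy check: perfect recoveries (up to sign) plus one junk vector
rng = np.random.default_rng(4)
W_true = rng.standard_normal((5, 7))
W_true /= np.linalg.norm(W_true, axis=1, keepdims=True)
W_hat = list(-W_true) + [np.ones(7) / np.sqrt(7)]
assert recovery_rate(W_true, W_hat, 0.05) == 1.0
assert fp_rate(W_hat, W_true, 0.05) == 1 / 6
```

Note that the same function `E` serves both rates, with the roles of true and recovered vectors exchanged.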
Results for perturbed orthogonal weights
The results of the study are presented in Figs. 2 and 3 and show that our procedure typically recovers most of the network weights, while suffering only a few false positives. Considering for example a sigmoidal network, we achieve almost perfect recovery of the weights in both layers at a threshold of \(T = 0.05\) for every network architecture, see Fig. 3a, c. For a \(\tanh \)-network, the performance is slightly worse, but we still recover most weights in the second layer and a large portion of those in the first layer at a reasonable threshold, see Fig. 3b, d.
Inspecting the plots more closely, we can notice some shared trends and differences between sigmoid and \(\tanh \) networks. In both cases, the performance improves when increasing the input dimensionality or, equivalently, the number of neurons in the first layer, even though the number of weights that need to be recovered increases accordingly. This is particularly the case for \(\tanh \)-networks as visualized in Fig. 3b, d and is most likely caused by reduced correlation of the weights in higher dimensions. As previously mentioned, the correlation is encoded within the constant \(\nu =C_F-1\) used in the analysis in Sect. 3.
For fixed \(m_0\), on the other hand, different activation functions react differently to changes of \(m_1\). For larger \(m_1\) in a sigmoid network, the projection error increases and the recovery of weights in the second layer worsens, as shown in Fig. 2a, c. This is expected by Theorem 5. Inspecting the results for \(\tanh \) networks, the projection error actually decreases when increasing \(m_1\), see Fig. 2b, and the recovery performance improves. Figure 3d shows that especially weights in the first layer are more easily recovered if \(m_1\) is large, such that the case \(m_0= 45\), \(m_1= 23\) allows for perfect recovery at a threshold \(T = 0.05\). This behavior cannot be fully explained by our general theory, e.g., Theorem 5.
Results for random weights from the unit sphere When sampling the weights independently from the unit sphere, the recovery problem seems more challenging for moderate dimension \(d=m_0\) and for both activation functions. This confirms the expectation that the smallness of \(\nu = C_F-1\) is crucial. Figure 4c, d suggests that especially recovering the weights of the second layer is more difficult than in the perturbed orthogonal case. Still, we achieve good performance in many cases. For sigmoid networks, Fig. 4c shows that we always recover most weights in the first layer, and a large portion of the weights in the second layer if \(m_1/m_0\) is small. Moreover, keeping \(m_1/m_0\) constant while increasing \(m_0\) improves the performance significantly, as we expect from an improved constant \(\nu = C_F-1\). Figure 4a, c shows almost perfect recovery for \(m_0= 45,\ m_1= 5\), while suffering only a few false positives.
For \(\tanh \)-networks, Fig. 4d shows that increasing \(m_0\) benefits recovery of weights in both layers, while increasing \(m_1\) benefits recovery of first layer weights and harms recovery of second layer weights. We still achieve small false positive rates in Fig. 4b, and good recovery for \(m_0= 45\), and the trend continues when further increasing \(m_0\).
Finally, a notable difference between the perturbed orthogonal case and the unit-sphere case is the behavior of the projection error \(\Vert P_{\hat{\mathcal{W}}}- P_{\mathcal{W}}\Vert _F/(m_0+m_1)\) for networks with sigmoid activation function. Comparing Figs. 2a and 5a, the dependency of the projection error on \(m_1\) is stronger when sampling independently from the unit-sphere. This is explained by Theorem 5 since \(\left\| {B}\right\| ^2\) is independent of \(m_1\) in the perturbed orthogonal case and grows like \(\mathcal {O}(\sqrt{m_1})\) when sampling from the unit-sphere.
5 Open Problems
With the previous theoretical results of Sect. 3 and the numerical experiments of Sect. 4, we show how to reliably recover the entangled weights \(\{\hat{w}_j: j \in [m_0+m_1]\} \approx \{w_j: j \in [m_0+m_1]\} = \{a_i: i \in [m_0]\} \cup \{v_\ell : \ell \in [m_1]\}\). However, some issues remain open.
-
(i)
In Theorem 5, the dependency of \(\alpha >0\) on the network architecture and on the input distribution \(\mu _X\) is left implicit. However, it plays a crucial role for fully estimating the overall sample complexity.
-
(ii)
While we could prove that Algorithm 3 increases the spectral norm of its iterates in \(\hat{\mathcal{W}}\cap \mathbb S\), we could not yet show that it always converges to nearly rank-1 matrices in \(\hat{\mathcal{W}}\), although this is observed numerically, see also Remark 24. We also could not exclude the existence of spurious local minimizers of the nonlinear program (31), as stated in Theorem 16. However, we conjecture that there are none or that they are somehow hard to observe numerically.
-
(iii)
Obtaining the approximating vectors \(\{\hat{w}_j: j \in [m_0+m_1]\} \approx \{w_j: j \in [m_0+m_1]\} = \{a_i: i \in [m_0]\} \cup \{v_\ell : \ell \in [m_1]\}\) does not suffice to reconstruct the entire network. In fact, it is a priori impossible to know whether \(\hat{w}_j\) approximates one of the \(a_i\) or one of the \(v_\ell \), up to sign and permutation, and the attribution to the corresponding layer needs to be derived by querying the network.
-
(iv)
Once we have obtained, up to signs and permutations, \(\{\hat{a}_i: i \in [m_0]\} \approx \{a_i: i \in [m_0]\}\) and \( \{\hat{v}_\ell : \ell \in [m_1]\} \approx \{v_\ell : \ell \in [m_1]\}\) by properly grouping \(\{\hat{w}_j: j \in [m_0+m_1]\}\), it remains to approximate or identify the activation functions \(g_i\) and \(h_\ell \). In the case where \(g_i(\cdot ) = \phi (\cdot - \theta _i)\) and \(h_\ell (\cdot ) = \phi (\cdot - \tau _\ell )\), this amounts to identifying the shifts \(\theta _i\), \(i \in [m_0]\), and \(\tau _\ell \), \(\ell \in [m_1]\). Such identification is also crucial for computing the matrix \(G_0={\text {diag}}(g_1'(0),\dots ,g_{m_0}'(0))\), which allows the disentanglement of the weights \(b_\ell \) from the weights A and \(v_\ell = AG_0 b_\ell /\Vert AG_0 b_\ell \Vert _2\). At this point, the network is fully reconstructed.
-
(v)
The generalization of our approach to networks with more than two hidden layers is clearly the next relevant issue to be considered as a natural development of this work.
While problems (i) and (ii) seem difficult to solve by the methods used in this paper, we think that problems (iii) and (iv) are solvable, both theoretically and numerically, with just a bit more effort. For a self-contained conclusion of this paper, in the following sections we sketch some possible approaches to these issues, as a glimpse towards future developments, which will be included more exhaustively in [21]. The generalization of our approach to networks with more than two hidden layers, as mentioned in (v), is surprisingly simpler than one may expect, and it is in the course of finalization [21]. For a network \(f(x):= f(x;W_1,\dots ,W_L) = 1^T g_L(W_L^T g_{L-1}(W_{L-1}^T \dots (g_1 (W_1^T x))\dots ))\), with \(L>2\), again by second-order differentiation it is possible to obtain an approximation space
of the matrix space spanned by the tensors of entangled weights, where the \(G_i\) are suitable diagonal matrices depending on the activation functions. The tensors \((W_k G_k \dots W_2 G_1 w_{1,j}) \otimes (W_k G_k \dots W_2 G_1 w_{1,j})\) can again be identified by a minimal rank principle. The disentanglement proceeds again layer by layer as in this paper, see also [5].
6 Reconstruction of the Entire Network
In this section, we address problems (iii) and (iv) described in Sect. 5. Our final goal is of course to construct a two-layer network \(\hat{f}\), with \(m_0\) and \(m_1\) nodes in its hidden layers, such that \(\hat{f} \approx f\). Additionally, we study whether the individual building blocks (e.g., the matrices \(\hat{A}\), \(\hat{B}\), and the biases in both layers) of \(\hat{f}\) match their corresponding counterparts in f.
To construct \(\hat{f}\), we first discuss how the recovered entangled weights \(\{\hat{w}_{j} : j \in [m_0+m_1]\}\) (see Sect. 4) can be assigned to either the first or the second layer, depending on whether \(\hat{w}_{j}\) approximates one of the \(a_i\)'s or one of the \(v_{\ell }\)'s. Afterwards, we discuss a modified gradient descent approach that optimizes the deparametrized network (its entangled weights are known at this point!) over the remaining unknown parameters of the network function, e.g., the biases \(\theta _i\) and \(\tau _\ell \).
6.1 Distinguishing First and Second Layer Weights
Attributing approximate entangled weights to the first or the second layer is generally a challenging task. In fact, even the true weights \(\{a_i: i \in [m_0] \}\), \(\{v_\ell : \ell \in [m_1]\}\) cannot be assigned to the correct layer based exclusively on their entries when no additional a priori information (e.g., some distributional assumption) is available. Therefore, assigning \(\hat{w}_{j}\), \(j \in [m_0+ m_1]\), to the correct layer requires querying the network f again for additional information.
The strategy we sketch here is designed for sigmoidal activation functions and networks with (perturbed) orthogonal weights in each layer. Sigmoidal functions are monotonic, have a bell-shaped first derivative, and are bounded by two horizontal asymptotes as the input tends to \(\pm \infty \). If the activation functions \(\{g_i: i \in [m_0]\}\) and \(\{h_\ell : \ell \in [m_1]\}\) are translated sigmoidal functions, these properties imply
whenever the direction w has nonzero correlation \(a_i^T w \ne 0\) with every first-layer weight in \(\{a_i:i \in [m_0]\}\).
Assume now that \(\{a_i : i \in [m_0]\}\) is a perturbed orthonormal system, and that the second-layer weights \(\{b_{\ell } : \ell \in [m_1]\}\) are generic and dense (nonsparse). Recalling the definition \(v_\ell = AG_0b_\ell /\left\| {AG_0b_\ell }\right\| \), the vector \(v_\ell \) generally has nonzero correlation with each vector in \(\{a_i : i \in [m_0]\}\), while \(a_i^T a_j \approx 0\) for any \(i \ne j\). Combining this with observation (43), it follows that \(\left\| {\nabla f(t a_i)}\right\| \) is expected to tend to 0 much more slowly than \(\left\| {\nabla f(t v_{\ell })}\right\| \) as \(t\rightarrow \infty \). In fact, if \(\{a_i : i \in [m_0]\}\) were an exactly orthonormal system, \(\left\| {\nabla f(t a_i)}\right\| \) would eventually approach a positive constant as \(t\rightarrow \infty \). We illustrate in Fig. 6 the different behavior of the trajectories \(t \mapsto \left\| {\nabla f(tw)}\right\| _2\) for \(w \in \{\hat{w}_j \approx a_i \text{ for some } i\}\) and for \(w \in \{\hat{w}_j \approx v_\ell \text{ for some } \ell \}\).
Fig. 6: Trajectories \(t \mapsto \left\| {\nabla f(tw)}\right\| _2\) for \(w \in \{\hat{w}_j: j \in [m]\}\); blue trajectories correspond to \(\hat{w}_j \approx a_i\) for some i, red trajectories to \(\hat{w}_j \approx v_\ell \) for some \(\ell \). The separation of the trajectories reflects the different decay properties
Practically, for \(T \in \mathbb N\) and for each candidate vector in \(\{\hat{w}_{j} : j\in [m_0+m_1]\}\), we query f to compute \(\Delta _\epsilon f(t_k\hat{w}_j)\) for a few steps \(\{t_{k} : k \in [T]\}\) in order to approximate
Then we compute a permutation \(\pi : [m] \rightarrow [m]\) that orders the weights so that \(\hat{\mathcal{I}}(\hat{w}_{\pi (i)}) \ge \hat{\mathcal{I}}(\hat{w}_{\pi (j)})\) whenever \(i < j\). The candidates \(\{\hat{w}_{\pi (j)} : j=1,\ldots ,m_0\}\) exhibit the slowest decay, respectively the largest gradient norms, and are thus assigned to the first layer. The remaining candidates \(\{\hat{w}_{\pi (\ell )}: \ell =m_0+1,\ldots ,m_0+m_1\}\) are assigned to the second layer.
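A minimal simulation of this ranking (numpy; exactly orthogonal first-layer weights, \(\tanh\) activations with zero shifts so that \(G_0 = \mathsf{Id}\), and the grid \(t_k \in \{-20,\dots,20\}\) are all our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
m0, m1 = 8, 4
phi = np.tanh                        # g_i = h_l = tanh with zero shifts => G0 = Id

A, _ = np.linalg.qr(rng.standard_normal((m0, m0)))   # orthogonal first layer
B = rng.standard_normal((m0, m1))
f = lambda x: np.sum(phi(B.T @ phi(A.T @ x)))

def grad_norm(x, eps=1e-5):
    """Central finite-difference approximation of ||grad f(x)||_2."""
    g = [(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(len(x))]
    return np.linalg.norm(g)

V = A @ B                            # entangled weights v_l = A G0 b_l / ||.||_2
V /= np.linalg.norm(V, axis=0)

cands = list(A.T) + list(V.T)        # first-layer candidates first, then second-layer
scores = [np.mean([grad_norm(t * w) for t in range(-20, 21)]) for w in cands]

# ||grad f(t a_i)|| decays much more slowly than ||grad f(t v_l)||, so the
# m0 largest scores typically single out the first-layer weights
order = np.argsort(scores)[::-1]
first_layer = set(order[:m0])
```

The scores play the role of \(\hat{\mathcal{I}}\) above; sorting them in decreasing order yields the permutation \(\pi\).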
Numerical experiments We have applied the proposed strategy to assign vectors \(\{\hat{w}_{j} : j \in [m_0+m_1]\}\), which are outputs of experiments conducted in Sect. 4, to either the first or the second layer. Since each \(\hat{w}_j\) does not exactly correspond to a vector in \(\{a_i : i \in [m_0]\}\) or \(\{v_\ell : \ell \in [m_1]\}\), we assign a ground truth label \(L_j = 1\) to \(\hat{w}_j\) if the closest vector to \(\hat{w}_j\) belongs to \(\{a_i : i \in [m_0]\}\), and \(L_j = 2\) if it belongs to the set \(\{v_\ell : \ell \in [m_1]\}\). Denoting similarly the predicted label \(\hat{L}_j = 1\) if \(\pi (j) \in \{1,\ldots ,m_0\}\) and \(\hat{L}_j = 2\) otherwise, we compute the success rates
to assess the proposed strategy. Hyperparameters are \(\epsilon = 10^{-5}\) for the step length in the finite difference approximation \(\Delta _\epsilon f(\cdot )\), and \(t_k = - 20 + k\) for \(k \in [40]\).
The results for all four scenarios considered in Sect. 4 are reported in Table 1. We see that our simple strategy achieves remarkable success rates, in particular if the network weights in each layer form perturbed orthogonal systems. If the weights are sampled uniformly from the unit sphere with moderate dimension \(d=m_0\), then, as one may expect, the success rate drops. In fact, for small \(d=m_0\), the vectors \(\{a_i : i \in [m_0]\}\) tend to be less orthogonal, and thus the assumption \(a_i^T a_j \approx 0\) for \(i\ne j\) is no longer satisfied.
Finally, we stress that the proposed strategy is simple and efficient, and relies only on a few additional point queries of f, which are negligible compared to the recovery step itself (for a reasonable query size T). In fact, the method relies on a single (nonlinear) feature of the map \(t \mapsto \left\| {\nabla f(t\hat{w}_j)}\right\| _2\) in order to decide upon the label of \(\hat{w}_j\). An interesting direction for future work is to develop more robust approaches, potentially using higher-dimensional features of the trajectories \( t \mapsto \left\| {\nabla f(t\hat{w}_j)}\right\| _2\), to achieve high success rates even when \(a_i^T a_j \approx 0\) for \(i\ne j\) does not hold anymore.
6.2 Reconstructing the Network Function Using Gradient Descent
The previous section allows assigning unlabeled candidates \(\{\hat{w}_j : j \in [m_0+ m_1]\}\) to either the first or second layer, resulting in matrices \(\hat{A} = [\hat{a}_1|\ldots |\hat{a}_{m_0}]\) and \(\hat{V} = [\hat{v}_1|\ldots |\hat{v}_{m_1}]\) that ideally approximate A and V up to column signs and permutations. Assuming that the network \(f \in \mathcal{F}(m_0,m_0, m_1)\) is generated by shifts of one activation function, i.e., \(g_i(t) = \phi (t + \theta _i)\) and \(h_\ell (t) = \phi (t + \tau _{\ell })\) for some \(\phi \), this means only signs, permutations, and bias vectors \(\theta \in \mathbb {R}^{m_0}\), \(\tau \in \mathbb {R}^{m_1}\) are missing to fully reconstruct f. In this section, we show how to identify these remaining parameters by applying a gradient descent method to minimize the least squares of the output misfit of the deparametrized network. In fact, as we clarify below, the original network f can be explicitly described as a function of the known entangled weights \(a_i\) and \(v_\ell \) and of the unknown remaining parameters (signs, permutations, and biases), see Proposition 26 and Corollary 27.
Let now \(\mathcal{D}_{m}\) denote the set of \(m\times m\) diagonal matrices, and define a parameter space \(\Omega := \mathcal{D}_{m_1}\times \mathcal{D}_{m_0}\times \mathcal{D}_{m_0} \times \mathbb {R}^{m_0}\times \mathbb {R}^{m_1}\). To reconstruct the original network f, we propose to fit parameters \((D_1, D_2, D_3, w, z) \in \Omega \) of a function \(\hat{f}: \mathbb {R}^{m_0}\times \Omega \rightarrow \mathbb {R}\) defined by
to a number of additionally sampled points \(\{(X_i,Y_i) : i \in [m_f]\}\) where \(Y_i = f(X_i)\) and \(X_i \sim \mathcal{N}(0,\mathsf {Id}_{m_0})\). The parameter fitting can be formulated as solving the least squares
We note that, due to the identification of the entangled weights and the deparametrization of the problem, \(\dim (\Omega ) = 3m_0+ 2m_1\), so that the least squares problem has significantly fewer free parameters than the number \(m_0^2 + (m_0\times m_1)+ (m_0+m_1)\) of original parameters of the entire network. Hence, our previous theoretical results of Sect. 3 and numerical experiments of Sect. 4 greatly scale down the usual effort of fitting all parameters at once. We also mention at this point that the optimization (45) might have multiple global solutions due to possible symmetries, see also [20] and Remark 28, and we take the most obvious ones into account in our numerical experiments.
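For concreteness, the deparametrized network and the objective can be sketched as follows. Since display (44) is not reproduced here, the precise form of \(\hat{f}\), namely \(\hat{f}(x; D_1, D_2, D_3, w, z) = 1^T \phi (D_1 \hat{V}^T \hat{A}^{-T} D_2\, \phi (D_3 \hat{A}^T x + w) + z)\), is an assumption on our part, inferred from Corollary 27 and its proof; diagonal matrices are stored as 1-d arrays of their diagonal entries:

```python
import numpy as np

def f_hat(x, D1, D2, D3, w, z, A_hat, V_hat, phi):
    """Deparametrized network with free parameters (D1, D2, D3, w, z);
    A_hat (m0 x m0) and V_hat (m0 x m1) are the recovered entangled weights.
    The concrete form is inferred from Corollary 27 (assumption)."""
    inner = phi(D3 * (A_hat.T @ x) + w)                      # in R^{m0}
    # second-layer matrix D1 V_hat^T A_hat^{-T} D2, shape m1 x m0
    B_hat_T = (D1[:, None] * (V_hat.T @ np.linalg.inv(A_hat).T)) * D2[None, :]
    return float(np.sum(phi(B_hat_T @ inner + z)))

def least_squares(params, samples, A_hat, V_hat, phi):
    """Objective J of (45): mean squared misfit over sampled pairs (X_i, Y_i)."""
    D1, D2, D3, w, z = params
    return np.mean([(f_hat(X, D1, D2, D3, w, z, A_hat, V_hat, phi) - Y) ** 2
                    for X, Y in samples])
```

The objective vanishes exactly when the parameters reproduce the sampled network values, consistent with Corollary 27.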
We will now show that there exist parameters \((D_1, D_2, D_3, w, z) \in \Omega \) that allow for exact recovery of the original network, whenever \(\hat{A}\) and \(\hat{V}\) are correct up to signs and permutations. We first need the following proposition, which provides a different reparametrization of the network using \(\hat{A}\) and \(\hat{V}\). The proof of the proposition requires only elementary linear algebra and properties of sign and permutation matrices. Details are deferred to Appendix 3.
Proposition 26
Let \(f \in \mathcal{F}(m_0,m_0, m_1)\) with \(g_i(t) = \phi (t + \theta _i)\) and \(h_\ell (t) = \phi (t + \tau _{\ell })\), and define the function \(\tilde{f}: \mathbb {R}^{m_0}\times \mathcal{D}_{m_0} \times \mathcal{D}_{m_1} \times \mathbb {R}^{m_0} \times \mathbb {R}^{m_1} \rightarrow \mathbb {R}\) via
If there are sign matrices \(S_A\), \(S_V\), and permutations \(\pi _A\), \(\pi _V\) such that \(A\pi _A = \hat{A} S_A\), \(V\pi _V = \hat{V} S_V\), then we have \(f(x) = \tilde{f}(x; S_A, S_V, \pi _A^T \theta , \pi _V^T \tau )\).
We note here that replacing \(\hat{f}\) by \(\tilde{f}\) in (45) is tempting because it further reduces the number of parameters (\(\dim (\mathcal{D}_{m_0} \times \mathcal{D}_{m_1} \times \mathbb {R}^{m_0} \times \mathbb {R}^{m_1}) = 2(m_0+ m_1)\)), but an explicit computation shows that evaluating the gradient of \(\tilde{f}\) with respect to D also requires evaluating \(D^{-1}\). Bearing in mind that D ideally converges to \(S_A\) during the optimization, the diagonal entries of D are likely to cross zero while optimizing. Such a minimization may therefore be unstable, and we instead work with \(\hat{f}\). The following corollary shows that this form also allows finding optimal parameters leading to the original network.
Corollary 27
Let \(f \in \mathcal{F}(m_0,m_0, m_1)\) with \(g_i(t) = \phi (t + \theta _i)\) and \(h_\ell (t) = \phi (t + \tau _{\ell })\). If there exist sign matrices \(S_A\), \(S_V\), and permutations \(\pi _A\), \(\pi _V\) such that \(A\pi _A = \hat{A} S_A\), \(V\pi _V = \hat{V} S_V\), then there exist diagonal matrices \(D_1, D_2\) such that \(f(x) = \hat{f}(x;D_1, D_2, S_A, \pi _A^T \theta , \pi _V^T \tau )\).
Proof
Based on Proposition 26, we can rewrite \(f(x) = 1^T\phi (S_V \hat{B}^T \phi (S_A \hat{A}^T x + \pi _A^T w) + \pi _V^T z)\), so it remains to show that \(S_V \hat{B}^T = D_1 \hat{V}^T \hat{A}^{-T} D_2\) for diagonal matrices \(D_1\), \(D_2\). First we note
Using this, and \(D = S_A\) in the definition of \(\hat{B}\) in Proposition 26, it follows that
Multiplying by \(S_V\) from the left, we obtain
Remark 28
(Simplification for odd functions) If \(\phi \) in Proposition 26 satisfies \(\phi (-t) = -\phi (t)\), then \(\hat{f}(x; D_1, S D_2, S D_3, S w, z) = \hat{f}(x; D_1, D_2, D_3, w, z)\) for an arbitrary sign matrix \(S \in \mathcal{D}_{m_0}\). Thus, choosing \(S = S_A\), there also exist diagonal \(D_1\) and \(D_2\) with \(f(x) = \hat{f}(x; D_1, D_2, \mathsf {Id}_{m_0}, S_A \pi _A^T w, \pi _V^T \tau )\).
Assuming \(\hat{A}\) and \(\hat{V}\) are correct up to sign and permutation, Corollary 27 implies that \(J = 0\) is the global optimum, and it is attained by parameters leading to the original network f. Furthermore, Remark 28 implies that there is ambiguity with respect to \(D_3\), if \(\phi \) is an odd function. Thus we can also prescribe \(D_3 = \mathsf {Id}_{m_0}\) and neglect optimizing this variable if \(\phi \) is odd.
We now study numerically the feasibility of (45). First, we consider the case \(\hat{A} = A\) and \(\hat{V} = V\) to assess (45) in isolation, so as not to suffer from possible errors introduced by other parts of our learning procedure (see Sects. 4 and 6.1). Afterwards we also take these additional approximations into consideration and present results for \(\hat{A} \approx A\) and \(\hat{V} \approx V\).
Numerical experiments
We minimize (45) by standard gradient descent with learning rate 0.5 if \(\phi (t) = \frac{1}{1+e^{-t}} - \frac{1}{2}\) (shifted sigmoid), respectively learning rate 0.025 if \(\phi (t) = \tanh (t)\). We sample \(m_f = 10(m_0+ m_1)\) additional points, which is only slightly more than the number of free parameters. Gradient descent is run for 500K iterations (due to the small number of variables, this is not time consuming) and is stopped prematurely only if the iteration stalls. Initially we set \(D_2 = D_3 = \mathsf {Id}_{m_0}\), and all other variables are set to random draws from \(\mathcal{N}(0,0.1)\).
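The descent loop just described can be sketched as follows. This hypothetical driver is ours: for brevity it uses central finite-difference gradients instead of the analytic gradients of (45), and the stall criterion is one simple choice among many:

```python
import numpy as np

def fit(objective, x0, lr=0.5, max_iter=500_000, stall_tol=1e-12, h=1e-6):
    """Plain gradient descent on a flattened parameter vector, stopped
    prematurely only when the iteration stalls (sketch; finite-difference
    gradients stand in for the analytic gradients of (45))."""
    x = x0.astype(float).copy()
    prev = objective(x)
    for _ in range(max_iter):
        g = np.zeros_like(x)
        for k in range(x.size):
            e = np.zeros_like(x)
            e[k] = h
            g[k] = (objective(x + e) - objective(x - e)) / (2 * h)
        x -= lr * g
        cur = objective(x)
        if abs(prev - cur) < stall_tol:  # the iteration stalls -> stop early
            break
        prev = cur
    return x

# toy quadratic: the iterate converges to the least squares minimizer
obj = lambda p: float(np.sum((p - np.array([1.0, -2.0])) ** 2))
p_star = fit(obj, np.zeros(2), lr=0.1, max_iter=10_000)
```

In practice one would flatten \((D_1, D_2, D_3, w, z)\) into one vector, initialize \(D_2 = D_3 = \mathsf{Id}\) and the rest from \(\mathcal{N}(0, 0.1)\) as above, and use the analytic gradient.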
Denoting by \(\omega ^* = (D_1^*, D_2^*, D_3^*, w^*, z^*) \in \Omega \) the gradient descent output, we measure the relative mean squared error (MSE) and the relative \(L_{\infty }\)-error
using \(m_{\text {test}} = 50{,}000\) samples \(Z_i \sim \mathcal{N}(0,\mathsf {Id}_{m_0})\). Moreover, we also report the relative bias errors
which indicate whether the original bias vectors are recovered. We repeat each experiment 30 times and report averaged values.
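These error measures can be implemented as follows; since the corresponding displays are not reproduced here, the exact normalizations of the relative errors are our assumption:

```python
import numpy as np

def relative_errors(f_true, f_fit, theta, theta_hat, tau, tau_hat,
                    m0, n_test=50_000, seed=1):
    """Relative MSE and relative L_inf error on Gaussian test points
    Z_i ~ N(0, Id), plus relative bias errors for theta and tau
    (normalizations are an assumption, not taken from the paper)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_test, m0))
    y = np.array([f_true(z) for z in Z])
    yhat = np.array([f_fit(z) for z in Z])
    mse_rel = np.mean((y - yhat) ** 2) / np.mean(y ** 2)
    linf_rel = np.max(np.abs(y - yhat)) / np.max(np.abs(y))
    e_theta = np.linalg.norm(theta - theta_hat) / np.linalg.norm(theta)
    e_tau = np.linalg.norm(tau - tau_hat) / np.linalg.norm(tau)
    return mse_rel, linf_rel, e_theta, e_tau
```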
Table 2 presents the results of the experiments and shows that we reconstruct a network function that is very close to the original network f in both the \(L_2\) and the \(L_{\infty }\) norm, in every scenario. The maximal error is \(\approx 10^{-3}\), which is likely further reducible by increasing the number of gradient descent iterations, or by using finer-tuned learning rates or acceleration methods. Therefore, the experiments strongly suggest that we are indeed reconstructing a function that approximates f uniformly well. Inspecting the errors \(E_{\theta }\) and \(E_{\eta }\) also supports this claim, at least in all scenarios where the \(\tanh \) activation is used. In many cases, the relative errors are below \(10^{-7}\), implying that we recover the original bias vectors of the network. Surprisingly, the accuracy of the recovered biases drops by a few orders of magnitude in the sigmoid case, despite convincing results when measuring predictive performance in \(L_2\) and \(E_{\infty }\). We believe that this is due to faster flattening of the gradients around the stationary point compared to the case of a \(\tanh \) activation function, and that it can be improved by using more sophisticated strategies for choosing the gradient descent step size. We also tested (45) with \(D = \mathsf {Id}_{m_0}\) fixed, since \(\tanh \) and the shifted sigmoid are odd functions and thus Remark 28 applies. The results are consistently slightly better than those in Table 2, but qualitatively similar.
We ran similar experiments for perturbed orthogonal weights and when using \(\hat{A}\) and \(\hat{V}\) precomputed with the methods we described in Sects. 4 and 6.1. The quality of the results varies dependent on whether \(\hat{A} \approx A\) and \(\hat{V} \approx V\) (up to sign and permutation) holds, or a fraction of the weights has not been recovered. To isolate cases where \(\hat{A} \approx A\) and \(\hat{V} \approx V\) holds, we compute averaged MSE and \(L_{\infty }\) over all trials satisfying
We report the averaged errors and the number of trials satisfying this condition in Table 3. It shows that the reconstructed function is close to the original function, even if the weights are only approximately correct. Therefore we conclude that minimizing (45) provides a very efficient way of learning the remaining network parameters from just a few additional samples, once the entangled network weights A and V are (approximately) known.
Notes
Below, with slight abuse of notation, we may use the symbol A also for the span of the weights \(\{a_i: i=1,\dots , m\}\).
References
Anandkumar, A., Ge, R., Janzamin, M.: Guaranteed non-orthogonal tensor decomposition via alternating rank-1 updates (2014). arXiv:1402.5180
Anthony, M., Bartlett, P.L.: Neural Network Learning: Theoretical Foundations. Cambridge University Press, Cambridge (2009)
Bach, F.: Breaking the curse of dimensionality with convex neural networks. J. Mach. Learn. Res. 18(1), 629–681 (2017)
Benedetto, J.J., Fickus, M.: Finite normalized tight frames. Adv. Comput. Math. 18(2–4), 357–385 (2003)
Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. In: Advances in Neural Information Processing Systems, pp. 153–160 (2007)
Bhatia, R.: Matrix Analysis, vol. 169. Springer, Berlin (2013)
Blum, A., Rivest, R.L.: Training a 3-node neural network is NP-complete. In: Advances in Neural Information Processing Systems, pp. 494–501 (1989)
Breuel, T.M., Ul-Hasan, A., Al-Azawi, M.A., Shafait, F.: High-performance OCR for printed English and Fraktur using LSTM networks. In: 12th international conference on document analysis and recognition, pp. 683–687 (2013)
Bruna, J., Mallat, S.: Invariant scattering convolution networks. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1872–1886 (2013)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57 (2017)
Casazza, P.G., Leonhard, N.: Classes of finite equal norm parseval frames. Contemp. Math. 451, 11–32 (2008)
Cireşan, D., Meier, U., Masci, J., Schmidhuber, J.: Multi-column deep neural network for traffic sign classification. Neural Netw. 32, 333–338 (2012)
Cohen, A., Daubechies, I., DeVore, R., Kerkyacharian, G., Picard, D.: Capturing ridge functions in high dimensions from point queries. Constr. Approx. 35(2), 225–243 (2012)
Constantine, P.G.: Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies, vol. 2. SIAM (2015)
Constantine, P.G., Dow, E., Wang, Q.: Active subspace methods in theory and practice: applications to kriging surfaces. SIAM J. Sci. Comput. 36(4), A1500–A1524 (2014)
De Silva, V., Lim, L.-H.: Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J. Matrix Anal. Appl. 30(3), 1084–1127 (2008)
DeVore, R.A., Oskolkov, K., Petrushev, P.: Approximation by feed-forward neural networks. Ann. Numer. Math. 4, 261–288 (1996)
Devroye, L., Gyorfi, L.: Nonparametric Density Estimation: The \(L_1\) View. Wiley Interscience Series in Discrete Mathematics. Wiley (1985)
Elbrächter, D., Perekrestenko, D., Grohs, P., Bölcskei, H.: Deep neural network approximation theory. IEEE Trans. Inf. Theory 67(5), 2581–2623 (2021)
Fefferman, C.: Reconstructing a neural net from its output. Rev. Mat. Iberoam. 10(3), 507–555 (1994)
Fiedler, C., Fornasier, M., Klock, T., Rauchensteiner, M.: Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples (2021). arXiv:2101.07150
Fornasier, M., Schnass, K., Vybiral, J.: Learning functions of few arbitrary linear parameters in high dimensions. Found. Comput. Math. 12(2), 229–262 (2012)
Fornasier, M., Vybíral, J., Daubechies, I.: Robust and resource efficient identification of shallow neural networks by fewest samples. Inf. Inference J. IMA (2021). https://doi.org/10.1093/imaiai/iaaa036
Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Birkhäuser, Basel (2013)
Gittens, A., Tropp, J.A.: Tail bounds for all eigenvalues of a sum of random matrices (2011). arXiv:1104.4513
Golowich, N., Rakhlin, A., Shamir, O.: Size-independent sample complexity of neural networks (2017). arXiv:1712.06541
Graves, A., Mohamed, A.-R., Hinton, G.: Speech recognition with deep recurrent neural networks. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645–6649 (2013)
Håstad, J.: Tensor rank is NP-complete. J. Algorithms 11(4), 644–654 (1990)
Hillar, C.J., Lim, L.-H.: Most tensor problems are NP-hard. J. ACM 60(6), 45 (2013)
Hristache, M., Juditsky, A., Spokoiny, V.: Direct estimation of the index coefficient in a single-index model. Ann. Stat. 29, 595–623 (2001)
Ichimura, H.: Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. J. Econom. 58(1–2), 71–120 (1993)
Janzamin, M., Sedghi, H., Anandkumar, A.: Beating the perils of non-convexity: guaranteed training of neural networks using tensor methods (2015). arXiv:1506.08473
Judd, J.S.: Neural Network Design and the Complexity of Learning. MIT Press (1990)
Kawaguchi, K.: Deep learning without poor local minima. In: Advances in Neural Information Processing Systems, pp. 586–594 (2016)
Kolda, T.G.: Symmetric orthogonal tensor decomposition is trivial (2015). arXiv:1503.01375
Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
Li, K.-C.: On principal hessian directions for data visualization and dimension reduction: another application of Stein’s lemma. J. Am. Stat. Assoc. 87(420), 1025–1039 (1992)
Li, X.: Interpolation by ridge polynomials and its application in neural networks. J. Comput. Appl. Math. 144(1–2), 197–209 (2002)
Light, W.: Ridge functions, sigmoidal functions and neural networks. In: Approximation Theory VII, pp. 163–206 (1992)
Magnus, J.R.: On differentiating eigenvalues and eigenvectors. Econom. Theory 1(2), 179–191 (1985)
Mayer, S., Ullrich, T., Vybiral, J.: Entropy and sampling numbers of classes of ridge functions. Constr. Approx. 42(2), 231–264 (2015)
Mei, S., Misiakiewicz, T., Montanari, A.: Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. In: Conference on Learning Theory (pp. 2388–2464). PMLR (2019)
Mondelli, M., Montanari, A.: On the connection between learning two-layers neural networks and tensor decomposition. In: The 22nd International Conference on Artificial Intelligence and Statistics (pp. 1051–1060). PMLR (2019)
Moravčík, M., Schmid, M., et al.: Deepstack: expert-level artificial intelligence in heads-up no-limit poker. Science 356(6337), 508–513 (2017)
Nakatsukasa, Y., Soma, T., Uschmajew, A.: Finding a low-rank basis in a matrix subspace. Math. Program. 162(1–2), 325–361 (2017)
Petrushev, P.P.: Approximation by ridge functions and neural networks. SIAM J. Math. Anal. 30(1), 155–189 (1998)
Pinkus, A.: Approximating by ridge functions. In: Surface Fitting and Multiresolution Methods, pp. 279–292 (1997)
Pinkus, A.: Approximation theory of the MLP model in neural networks. Acta Numer. 8, 143–195 (1999)
Qu, Q., Sun, J., Wright, J.: Finding a sparse vector in a subspace: linear sparsity using alternating directions. In: Advances in Neural Information Processing Systems, pp. 3401–3409 (2014)
Rellich, F., Berkowitz, J.: Perturbation Theory of Eigenvalue Problems. CRC Press (1969)
Robeva, E.: Orthogonal decomposition of symmetric tensors. SIAM J. Matrix Anal. Appl. 37(1), 86–102 (2016)
Rotskoff, G.M., Vanden-Eijnden, E.: Neural networks as interacting particle systems: asymptotic convexity of the loss landscape and universal scaling of the approximation error (2018). arXiv:1805.00915
Shaham, U., Cloninger, A., Coifman, R.R.: Provable approximation properties for deep neural networks. Appl. Comput. Harmon. Anal. 44(3), 537–557 (2018)
Shalev-Shwartz, S., Ben-David, S.: Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press (2014)
Silver, D., Huang, A., Maddison, C.J., et al.: Mastering the game of go with deep neural networks and tree search. Nature 529(7587), 484 (2016)
Soudry, D., Carmon, Y.: No bad local minima: data independent training error guarantees for multilayer neural networks (2016). arXiv:1605.08361
Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C.: Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. Neural Netw. 32, 323–332 (2012)
Stein, C.M.: Estimation of the mean of a multivariate normal distribution. Ann. Stat. 9, 1135–1151 (1981)
Stewart, G.W.: Perturbation theory for the singular value decomposition. Technical report (1998)
Sturm, I., Lapuschkin, S., Samek, W., Müller, K.-R.: Interpretable deep neural networks for single-trial EEG classification. J. Neurosci. Methods 274, 141–145 (2016)
Tao, T.: When are eigenvalues stable? (2008). https://terrytao.wordpress.com/2008/10/28/when-are-eigenvalues-stable/. Accessed 2019-09-29
Tao, T.: Topics in Random Matrix Theory, vol. 132. American Mathematical Soc. (2012)
Tropp, J.A.: Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004)
van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer Series in Statistics. Springer (1996)
Vershynin, R.: High-Dimensional Probability: An Introduction with Applications in Data Science, vol. 47. Cambridge University Press (2018)
Wedin, P.-Å.: Perturbation bounds in connection with singular value decomposition. BIT Numer. Math. 12(1), 99–111 (1972)
Wiatowski, T., Grohs, P., Bölcskei, H.: Energy propagation in deep convolutional neural networks. IEEE Trans. Inf. Theory 64(7), 4819–4842 (2017)
Funding
Open Access funding enabled and organized by Projekt DEAL.
Communicated by Wolfgang Dahmen, Ronald A. Devore, and Philipp Grohs.
Appendix
The following lemma implies that \(\{a_1\otimes a_1,\ldots ,a_{m_0}\otimes a_{m_0},v_1 \otimes v_1,\ldots ,v_{m_1}\otimes v_{m_1}\}\) satisfying the properties of Definition 3 is a system of linearly independent matrices.
Lemma 29
Let \(\{z_1,\ldots ,z_m\}\subset \mathbb {R}^{m}\) have unit norm and satisfy \(\sum _{i=1}^{m} \left\langle z_j, z_i\right\rangle ^2 \le C_F\) for all \(j=1,\ldots ,m\). If \(1< C_F< 2\), the system \(\{z_1\otimes z_1,\ldots ,z_{m}\otimes z_{m}\}\) is linearly independent.
Proof
Assume to the contrary that \(\{z_1\otimes z_1,\ldots ,z_m\otimes z_m\}\) is not linearly independent; then there exists a nonzero \(\sigma \in \mathbb {R}^{m}\) with \(0 = \sum _{i=1}^{m}\sigma _i z_i\otimes z_i\), or equivalently \(0 = \sum _{i=1}^{m}\sigma _i \langle x, z_i\rangle ^2\) for all \(x \in \mathbb {R}^{m}\). Without loss of generality assume \(\left\| {\sigma }\right\| _{\infty } = \max _i \sigma _i\) (otherwise we multiply the representation by \(-1\)), and denote by \(i^*\) the index achieving the maximum. Then we have
where we used \(\left\| {z_{i^*}}\right\| = 1\). Since \(\min _{i} \sigma _i \ge 0\) immediately yields a contradiction, we continue with the case \(\min _{i} \sigma _i < 0\). We can further bound
and by dividing by \(\left( C_F- 1\right) \) and subtracting \(\min _{i} \sigma _i\) we obtain \(\left| {\min _{i} \sigma _i}\right| \ge \sigma _{i^*}(C_F- 1)^{-1}\). Since \((C_{F} - 1)^{-1} > 1\), this yields the contradiction \(\left\| {\sigma }\right\| _{\infty } \ge \left| {\min _{i} \sigma _i}\right| > \sigma _{i^*} = \left\| {\sigma }\right\| _{\infty }\).
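The lemma can be illustrated numerically: for a perturbed orthonormal system with frame constant \(1< C_F< 2\), the vectorized rank-one tensors have full column rank. This is an independent sanity check of ours, not part of the proof:

```python
import numpy as np

def rank_one_tensors_independent(Z, tol=1e-10):
    """Check linear independence of {z_i z_i^T} via the rank of the matrix
    whose columns are the vectorized outer products."""
    M = np.column_stack([np.outer(z, z).ravel() for z in Z.T])
    return np.linalg.matrix_rank(M, tol=tol) == Z.shape[1]

# perturbed orthonormal system: frame constant stays in (1, 2)
rng = np.random.default_rng(0)
m = 6
Q = np.linalg.qr(rng.standard_normal((m, m)))[0]
Z = Q + 0.05 * rng.standard_normal((m, m))
Z /= np.linalg.norm(Z, axis=0)               # unit-norm columns z_1, ..., z_m
C_F = max(np.sum((Z.T @ Z) ** 2, axis=1))    # max_j sum_i <z_j, z_i>^2
```

For this draw one finds \(1< C_F< 2\) and `rank_one_tensors_independent(Z)` returns `True`, in agreement with Lemma 29.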
The linear independence of the system \(\{a_1\otimes a_1,\ldots ,a_{m_0}\otimes a_{m_0},v_1 \otimes v_1,\ldots ,v_{m_1}\otimes v_{m_1}\}\) implies that it is a Riesz basis for \(\mathcal{W}:= {\text {span}}\{a_1\otimes a_1,\ldots ,a_{m_0}\otimes a_{m_0},v_1 \otimes v_1,\ldots ,v_{m_1}\otimes v_{m_1}\}\). As such, there exist constants \(c_r\), \(C_R\) such that for every \(\sigma \in \mathbb {R}^{m_0+ m_1}\)
1.1 Additional Proofs for Section 2
Proof of Lemma 7
Fix any pair \(k,n \in [d]\) and define \(\phi (t) = f(x + t e_k + \epsilon e_n) - f(x + t e_k)\), where \(e_k\) denotes the k-th standard basis vector. By the mean value theorem and for \(\Delta ^2_{\epsilon }f(x) \in \mathbb {R}^{d\times d}\) given as in (17), there exist \(0< \xi _1, \xi _2 < \epsilon \) such that
Hence, we obtain
Assume k, n to be fixed and denote \(\tilde{x} = x + \xi _1 e_k + \xi _2 e_n\). By recalling our definition of \(\nabla ^2 f(x)\) in (10), it follows that \(\frac{\partial ^2 f}{\partial x_k\partial x_n}(x) = \varphi _1(x) + \varphi _2(x)\), where
Thus
As before, we start by applying the Lipschitz continuity to the summands of \(\left| {\varphi _1(x) - \varphi _1(\tilde{x})}\right| \):
where \(\tilde{C} = \max {\lbrace \eta _2 \kappa _1 \kappa _2 ,\kappa _1^3 \eta _3\rbrace }\). Hence,
Now
Applying the triangle inequality of the Frobenius norm results in
The last inequalities are due to \(\Vert a_i\Vert _2 = 1\) for all \(i \in [m_0]\) and \(\Vert b_\ell \Vert _1 \le \sqrt{m_0}\Vert b_\ell \Vert _2 = \sqrt{m_0}\) for all \(\ell \in [m_1]\). A similar computation yields
Combining both results gives
Here we write \(\xi _{1,kn}, \xi _{2, kn}\) to make clear that \(\xi _1, \xi _2\) change for every second-order partial derivative. However, all \(\xi _{1,kn}, \xi _{2, kn}\) are bounded by \(\epsilon \), so our result still holds. Applying the same procedure to \(\left| {\varphi _2(x) - \varphi _2(\tilde{x})}\right| \) yields
By setting \(\hat{C} = \max {\lbrace \eta _1\kappa _3, \kappa _1\kappa _2\eta _2\rbrace }\), we can develop the same bounds for both parts of the right-hand sum as for \(\varphi _1\), and get
Finally, we get
Setting \(C_{\Delta } = 16\max {\lbrace \tilde{C}, \hat{C}\rbrace }\) finishes the proof.
1.2 Additional Results and Proofs for Section 3
Lemma 30
Let \(\{w_\ell \otimes w_\ell : \ell \in [m]\}\) be a set of \(m < 2d - 1\) rank one matrices in \(\mathbb {S}\) such that any subset of \(\lceil m/2 \rceil + 1\) vectors \(\{w_{\ell _j}: j \in [\lceil m/2 \rceil +1]\}\) is linearly independent. Then for any \(X \in {\text {span}}{\{w_\ell \otimes w_\ell : \ell \in [m]\}} \cap \mathbb {S}\) with \({\text {rank}}(X)= 1\), there exists \(\ell ^*\) such that \(X = w_{\ell ^*}\otimes w_{\ell ^*}\).
Proof
Let \(X = \sum _{\ell =1}^{m}\alpha _\ell w_\ell \otimes w_\ell \in \mathbb {S}\), and denote \(\mathcal{I}= \{\ell \in [m] : \alpha _\ell \ne 0\}\). If \(1 < \left| {\mathcal{I}}\right| \le \lceil m/2 \rceil + 1\), the vectors \(\{w_{i}:i \in \mathcal{I}\}\) are linearly independent, and thus \({\text {rank}}(X) = \left| {\mathcal{I}}\right| > 1\). Otherwise, we split \(\mathcal{I}= \mathcal{I}_1\cup \mathcal{I}_2\) with \(\left| {\mathcal{I}_1}\right| = \lceil m/2 \rceil + 1\) and \(\left| {\mathcal{I}_2}\right| \le m - \lceil m/2 \rceil -1 \le m/2 - 1\). If we accordingly split \(X=X_1 + X_2\) with \(X_j := \sum _{\ell \in \mathcal{I}_j}\alpha _\ell w_\ell \otimes w_\ell \), the assumption implies \({\text {rank}}(X_1) = \lceil m/2 \rceil + 1\) and \({\text {rank}}(X_2) \le m/2 - 1\). Since furthermore \({\text {rank}}(X) \ge {\text {rank}}(X_1) - {\text {rank}}(X_2)\), it follows that \({\text {rank}}(X) \ge \lceil m/2 \rceil + 1 - (m/2 - 1) \ge 2\).
Corollary 31
Assume \(m < 2d - 1\), and \(\{w_\ell : \ell \in [m]\}\) satisfies the upper frame bound (9) with \(\nu := C_F- 1 < \lceil \frac{m}{2}\rceil ^{-1}\). Then for \(X \in \mathcal{W}\cap \mathbb {S}\) of \({\text {rank}}(X) = 1\), there exists \(\ell ^*\) such that \(X = w_{\ell ^*}\otimes w_{\ell ^*}\).
Proof
To apply Lemma 30, we establish a lower bound for the size of the smallest linearly dependent subset of \(\{w_\ell : \ell \in [m]\}\), commonly denoted by \(\text {spark}(\{w_\ell : \ell \in [m]\})\), see [63]. Following [63], it is bounded from below by
Using the frame property (9), we can bound
Taking additionally into account \(\nu < \lceil \frac{m}{2}\rceil ^{-1}\), it follows that
The result follows by applying Lemma 30.
Lemma 32
Let \(\mathcal{W},\ \hat{\mathcal{W}}\) be matrix subspaces of equal dimensions (e.g., subspaces of matrices in \(\mathbb {R}^{d\times d}\)) with corresponding orthogonal projections \(P_{\mathcal{W}},\ P_{\hat{\mathcal{W}}}\). Assume \(\delta := \left\| {P_{\mathcal{W}} - P_{\hat{\mathcal{W}}}}\right\| _F < 1\). For any \(W \in \mathcal{W}\) we have
In particular \(P_{\hat{\mathcal{W}}} : \mathcal{W}\rightarrow \hat{\mathcal{W}}\) is a bijection.
Proof
The right inequality follows by
Lemma 33
Let \(\{w_i : i \in [m]\} \subset \mathbb {R}^{d}\) be a set of unit-norm vectors satisfying the frame assumption
and some \(0< \nu < 1\). Let \(\mathcal{W}= {\text {span}}\{w_i\otimes w_i : i \in [m]\}\) with corresponding orthogonal projection \(P_{\mathcal{W}}\) and let \(\hat{\mathcal{W}}\subset \mathbb {R}^{d\times d}\) be a subspace of symmetric matrices with orthogonal projection \(P_{\hat{\mathcal{W}}}\) such that \(\delta := \left\| {P_{\mathcal{W}} - P_{\hat{\mathcal{W}}}}\right\| _F < 1\). Define \(\hat{W}_i := P_{\hat{\mathcal{W}}}(w_i)\), take \(M \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(M = \sum _{i=1}^{m} \sigma _i \hat{W}_i\), and let \(Z \in \mathcal{W}\) satisfy \(M = P_{\hat{\mathcal{W}}}(Z)\). Then
Moreover, for any unit norm vector v and any \(\hat{W}_j\), we have
Proof
We first note that \(1 = \left\| {M}\right\| _F = \left\| {P_{\hat{\mathcal{W}}}(Z)}\right\| _F \ge (1-\delta )\left\| {Z}\right\| _F\) implies \(\left\| {Z}\right\| _F \le (1-\delta )^{-1}\). For (48), we assume without loss of generality \(\max _k \sigma _k = \left\| {\sigma }\right\| _{\infty }\) (otherwise we perform the proof for \(-M\)), and denote \(j = \arg \max _i \sigma _i\). Then we have
For (49), we first notice that
and thus it suffices to bound the last two terms. For the first term, we get
and for the second
For (50), we first rewrite
Now denote \(\Delta := \hat{W}_j - W_j\). Since \(W_j^2 = W_j\), we have
since \(\mathsf {Id}- W_j\) is a projection matrix onto \({\text {span}}\{w_j\}^{\perp }\).
Proof of Lemma 15
We first calculate a lower bound for \(\lambda _D\) in terms of \(\min _i\sigma _i\) by
where \(C = c_f\) if \(\min _i \sigma _i > 0\) and \(C = C_F\) if \(\min _i \sigma _i \le 0\). We are left with bounding \(\sigma _{j^*} := \min _i \sigma _i\). Clearly, if \(\sigma _{j^*} > 0\), the result follows immediately. Therefore, we concentrate on the case \(\sigma _{j^*}\le 0 \) in the following. We first use (32) to get
Applying now Lemma 33, and \(\Vert \hat{W}_{j^*}\Vert \ge 1- \delta \), we obtain from (51)
Using this in the previously derived bound for \(\lambda _m\), and using \(C_F< 1+\nu \), we have
Since \(\delta , \nu < 1/4\) we obtain from (48) that \(\left\| {\sigma }\right\| _{\infty }\le 2\), and
Lemma 34
Let (A, d) be a metric space and \(F : A \rightarrow A\) be a continuous function. Let \((X_j)_{j \in \mathbb {N}}\) be a sequence generated by \(X_j = F^j(X_0)\) for some \(X_0 \in A\) and assume \(d(X_{j+1}, X_j) \rightarrow 0\). Then any convergent subsequence of \((X_j)_{j \in \mathbb {N}}\) converges to a fixed point of F.
Proof
Let \((X_{j_k})_{k \in \mathbb {N}}\) be a convergent subsequence of \((X_{j})_{j \in \mathbb {N}}\) with limit \(\bar{X} = \lim _{k\rightarrow \infty } X_{j_k}\). Then the subsequence \((X_{j_k + 1})_{k \in \mathbb {N}}\) satisfies \(d(X_{j_k + 1},\bar{X}) \le d(X_{j_k + 1}, X_{j_k}) + d(X_{j_k}, \bar{X}) \rightarrow 0\) as \(k\rightarrow \infty \), and thus \((X_{j_k + 1})_{k \in \mathbb {N}}\) also converges to \(\bar{X}\). By construction \(X_{j_k + 1} = F(X_{j_k})\). Taking the limit \(k\rightarrow \infty \) on both sides and using the continuity of F, we get \(\bar{X} = F(\bar{X})\), i.e., \(\bar{X}\) is a fixed point of F. \(\square \)
1.3 Proof of Proposition 26
Proof of Proposition 26
The first step is to replace first layer weights A by \(\hat{A} S_A\). This can be achieved by inserting the permutation \(\pi _A\) in the first layer and replacing by \(\hat{A} S_A\) according to
Next we need to replace the matrix \(\pi _A^T B\) using V, respectively \(\hat{V}\). Let \(n \in \mathbb {R}^{m_1}\) be defined by \(n_\ell = \left\| {AGb_\ell }\right\| ^{-1}\), and \(N = {\text {diag}}(n)\). By definition of the entangled weights, we have \(V = AGBN\), implying the relation \(B = G^{-1}A^{-1}V N^{-1}\). Using the assumptions \(A = \hat{A} S_A \pi _A^T\) and \(V = \hat{V}S_V\pi _V^T\), and the properties \(S_A^{-1} = S_A\), \(\pi _A^{-1} = \pi _A^T\), it follows that
Since \(G = {\text {diag}}(\phi '(\theta ))\), we have \(\pi _A^T G \pi _A = {\text {diag}}(\pi _A^T(\phi '(\theta ))) = {\text {diag}}((\phi '(\pi _A^T\theta )))=:\tilde{G}\). Inserting into (52), we get
The inner product with the all-ones vector is permutation invariant; hence, we can move an additional \(\pi _V^T\) into the second layer. Then, using that the diagonal matrix \(\tilde{N} := \pi _V^T N \pi _V\) commutes with \(S_V\), we get
It remains to show that \(\hat{B} = \tilde{G}^{-1} S_A \hat{A}^{-1} \hat{V} \tilde{N}^{-1}\), which is implied if \(\tilde{N}_{\ell \ell } = \Vert \tilde{G}^{-1} S_A \hat{A}^{-1}\hat{v}_\ell \Vert \). By the normalization property \(\left\| {b_\ell }\right\| = 1\) (see Definition 3) and \(B = G^{-1}A^{-1}V N^{-1}\), we first have
Using this, and the assumptions \(A^{-1} = \pi _A S_A \hat{A}^{-1}\), \(V \pi _V = \hat{V} S_V\), we obtain
where we used that \(S_V\) affects \(v_{\ell }\) only by multiplication with \(\pm 1\). The result follows since \(\pi _A^T\) is orthogonal and thus \(\Vert G^{-1}\pi _A S_A \hat{A}^{-1}\hat{v}_\ell \Vert = \Vert \pi _A^T G^{-1}\pi _A S_A \hat{A}^{-1}\hat{v}_\ell \Vert = \Vert \tilde{G}^{-1} S_A \hat{A}^{-1}\hat{v}_\ell \Vert \).
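The chain of identities in this proof can be verified numerically. The following sanity check is ours, not from the paper: it builds a random network, its entangled weights \(V = AGBN\), randomly signed and permuted \(\hat{A}, \hat{V}\), and the explicit diagonal choices \(D_1 = S_V \tilde{N}^{-1}\), \(D_2 = S_A \tilde{G}^{-1}\) that our reading of the proof of Corollary 27 suggests, and confirms that the reparametrized network reproduces f:

```python
import numpy as np

rng = np.random.default_rng(3)
m0, m1 = 4, 3
phi = np.tanh
dphi = lambda t: 1.0 - np.tanh(t) ** 2      # phi'

# original network with unit-norm columns (Definition 3)
A = rng.standard_normal((m0, m0)); A /= np.linalg.norm(A, axis=0)
B = rng.standard_normal((m0, m1)); B /= np.linalg.norm(B, axis=0)
theta, tau = rng.standard_normal(m0), rng.standard_normal(m1)
f = lambda x: float(np.sum(phi(B.T @ phi(A.T @ x + theta) + tau)))

# entangled weights V = A G B N, G = diag(phi'(theta)), N normalizing columns
G = np.diag(dphi(theta))
AGB = A @ G @ B
N = np.diag(1.0 / np.linalg.norm(AGB, axis=0))
V = AGB @ N

# recovered weights, correct up to column signs and permutations
pa = np.eye(m0)[rng.permutation(m0)]        # permutation pi_A
pv = np.eye(m1)[rng.permutation(m1)]        # permutation pi_V
SA = np.diag(rng.choice([-1.0, 1.0], m0))
SV = np.diag(rng.choice([-1.0, 1.0], m1))
A_hat = A @ pa @ SA                          # so that A pi_A = A_hat S_A
V_hat = V @ pv @ SV                          # so that V pi_V = V_hat S_V

# explicit reparametrization suggested by the proof of Corollary 27
Gt = pa.T @ G @ pa                           # tilde G
Nt = pv.T @ N @ pv                           # tilde N
D1 = SV @ np.linalg.inv(Nt)
D2 = SA @ np.linalg.inv(Gt)
M = D1 @ V_hat.T @ np.linalg.inv(A_hat).T @ D2
f_hat = lambda x: float(np.sum(
    phi(M @ phi(SA @ A_hat.T @ x + pa.T @ theta) + pv.T @ tau)))
```

For random inputs x one finds \(|f(x) - \hat{f}(x)|\) at the level of floating-point round-off, confirming the reparametrization up to machine precision.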
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Fornasier, M., Klock, T. & Rauchensteiner, M. Robust and Resource-Efficient Identification of Two Hidden Layer Neural Networks. Constr Approx 55, 475–536 (2022). https://doi.org/10.1007/s00365-021-09550-5
Keywords
- Deep neural networks
- Active sampling
- Exact identifiability
- Deparametrization
- Frames
- Nonconvex optimization on matrix spaces