1 Introduction

Deep learning is perhaps one of the most sensational scientific and technological developments of recent years. Despite the spectacular success of deep neural networks (NN) in outperforming other pattern recognition methods, achieving even superhuman skills in some domains [12, 36, 57], and the confirmed empirical successes in areas such as speech recognition [27], optical character recognition [8], and game solving [44, 55], the mathematical understanding of this machine learning technology is in its infancy. This is not only unsatisfactory from a scientific, especially mathematical, point of view, but it also means that deep learning currently has the character of a black-box method, and its success cannot yet be ensured by a full theoretical explanation. This leads to a lack of acceptance in many areas where interpretability is a crucial issue (like security, cf. [10]) or in applications where one wants to extract new insights from data [60].

Several general mathematical results on neural networks have been available since the 1990s [2, 17, 38, 39, 46,47,48], but deep neural networks have special features, and in particular superior properties in applications, that still cannot be fully explained by the known results. In recent years, new interesting mathematical insights have been derived for understanding approximation properties (expressivity) [19, 53] and stability properties [9, 67] of deep neural networks. Several other crucial and challenging questions remain open.

A fundamental one concerns the amount of training data required to obtain a good neural network, i.e., one achieving small generalization errors on future data. Classical statistical learning theory splits this error into bias and variance and provides general estimates by means of the so-called VC dimension or Rademacher complexity of the used class of neural networks [54]. However, the currently available estimates of these parameters [26] provide very pessimistic barriers in comparison to the empirical success. In fact, the trade-off between bias and variance is a function of the complexity of a network, which should be gauged by the number of sampling points needed to identify it uniquely. Thus, on the one hand, it is of interest to know which neural networks can be uniquely determined in a stable way by finitely many training points. On the other hand, unique identifiability is clearly a form of interpretability.

The motivating problem of this paper is the robust and resource-efficient identification of feedforward neural networks. Unfortunately, it is known that identifying a very simple (but general enough) neural network is indeed NP-hard [7, 33]. Without even invoking fully connected neural networks, recent work [22, 41] showed that the training of a single neuron (a ridge function or single-index model) can exhibit any possible degree of intractability, depending on the distribution of the input. Recent results [3, 34, 42, 52, 56], on the other hand, are more encouraging and show that minimizing the square loss of a (deep) neural network does not, in general or asymptotically (for a large number of neurons), suffer from poor local minima, although saddle points may still be present.

In this paper, we present conditions for a fully nonlinear two-layer neural network to be provably identifiable from a number of samples that depends polynomially on the dimension of the network. Moreover, we prove that our procedure is robust to perturbations. Our result is clearly of a theoretical nature, but it is also fully constructive and easily implementable. To our knowledge, this work is the first that allows a provable de-parametrization of the problem of deep network identification, beyond the simpler case of shallow (one hidden layer) neural networks already considered in the very recent literature [3, 23, 32, 34, 42, 43, 52, 56]. For the implementation, we require neither black-box high-dimensional optimization methods nor any handling of complex energy loss landscapes; only classical and relatively simple calculus and linear algebra tools are used (mostly function differentiation and singular value decompositions). The results of this paper build upon the work [22, 23], where the approximation from a finite number of sampling points has already been derived for the single neuron and for one-layer neural networks. The generalization of the approach of the present paper to networks with more than two hidden layers is, perhaps surprisingly, simpler than one may expect, and it is in the course of finalization [21]; see Sect. 5 (v) below for some details.

1.1 Notation

Let us collect here some notation used in this paper. Given any integer \(m \in \mathbb N\), we use the symbol \([m]:=\{1,2,\dots ,m \}\) to denote the index set of the first m integers. We denote by \(B_1^d\) the Euclidean unit ball in \(\mathbb R^d\), by \(\mathbb S^{d-1}\) the Euclidean sphere, and \(\mu _{\mathbb S^{d-1}}\) is its uniform probability measure. We denote by \(\ell _q^d\) the d-dimensional space \(\mathbb R^d\) endowed with the norm \(\Vert x\Vert _{\ell _q^d} =\left( \sum _{j=1}^d |x_j|^q \right) ^{1/q}\). For \(q=2\), we often write indifferently \(\Vert x\Vert = \Vert x\Vert _{2}= \Vert x\Vert _{\ell _2^d}\). For a matrix M, we denote by \(\sigma _k(M)\) its \(k^{th}\) singular value. We denote by \(\mathbb S\) the sphere of symmetric matrices of unit Frobenius norm \(\Vert \cdot \Vert _F\). The spectral norm of a matrix is denoted \(\Vert \cdot \Vert \). Given a closed convex set C, we denote by \(P_C\) the orthogonal projection operator onto C. (Sometimes we use such operators to project onto subspaces of \(\mathbb R^d\) or subspaces of symmetric matrices or onto balls of such spaces.) For vectors \(x_1,\dots , x_k \in \mathbb R^d\), we denote the tensor product \(x_1 \otimes \dots \otimes x_k\) as the tensor of entries \(({x_1}_{i_1} \dots {x_k}_{i_k})_{i_1,\dots ,i_k}\). For the case of \(k=2\), the tensor product \(x \otimes y\) of two vectors \(x,y \in \mathbb R^d\) equals the matrix \(x y^T = (x_i y_j)_{ij}\). For any matrix \(M \in \mathbb {R}^{m \times n}\)

$$\begin{aligned} {\text {vec}}(M) := (m_{11}, m_{21}, \dots , m_{m1}, m_{12}, m_{22}, \dots , m_{mn})^T \in \mathbb {R}^{mn}, \end{aligned}$$

is its vectorization, which is the vector created by the stacked columns of M.
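
For concreteness, the column-stacking convention can be checked in one line (note that numpy defaults to row-major order, so the Fortran order 'F' is needed):

```python
import numpy as np

# vec(M): stack the columns of M (column-major / Fortran order).
M = np.array([[1, 2], [3, 4], [5, 6]])      # 3 x 2 matrix
print(M.flatten(order="F"))                  # [1 3 5 2 4 6] = vec(M)
```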

1.2 From One Artificial Neuron to Shallow and Deeper Networks

1.2.1 Meet the Neuron

The simplest artificial neural network \(f:\Omega \subset \mathbb {R}^d \rightarrow \mathbb {R}\) is a network consisting of exactly one artificial neuron, which is modeled by a ridge-function (or single-index model) f as

$$\begin{aligned} f(x) = \phi (a^T x + \theta ) =g(a^T x), \end{aligned}$$

where \(g:\mathbb {R}\rightarrow \mathbb {R}\) is the shifted activation function \(\phi ( \cdot + \theta )\) and the vector \(a \in \mathbb {R}^d\) expresses the weight of the neuron. Since the beginning of the 1990s [30, 31], there has been a vast mathematical statistics literature on single-index models, which addresses the problem of approximating a, and possibly also g, from a finite number of samples of f in order to yield an expected least-squares approximation of f on a bounded domain \(\Omega \subset \mathbb {R}^d\). Assume for the moment that we can evaluate the network f at any point in its domain; we refer to this setting as active sampling. As we aim at uniform approximations, we adhere here to the language of recent results about the sampling complexity of ridge functions from the approximation theory literature, e.g., [13, 22, 41]. In those papers, the identification of the neuron is performed by using approximate differentiation. Let us clarify how this method works, as it will be the inspiration for the further developments below. For any \(\epsilon >0\), points \(x_i\), \(i=1,\dots, m_{\mathcal X}\), and differentiation directions \(\varphi _j\), \(j=1,\dots, m_{\Phi }\), we have

$$\begin{aligned} \frac{f (x_i + \epsilon \varphi _j)- f (x_i)}{\epsilon } \approx \frac{ \partial f (x_i)}{\partial \varphi _j} = g'(a^T x_i) a^T\varphi _j. \end{aligned}$$
(1)

Hence, differentiation exposes the weight of a neuron and allows one to test it against the vectors \(\varphi _j\). For every fixed index i, the approximate relationship (1) forms a linear system of dimensions \(m_\Phi \times d\) whose unknown is \(x^*_i=g'(a^T x_i) a\). Solving these systems approximately and independently for \(i=1,\dots, m_{\mathcal X}\) yields multiple approximations \(\hat{a}=x^*_i/\Vert x^*_i\Vert _2\approx a\) of the weight; the most stable of them with respect to the approximation error in (1) is the one for which \(\Vert x^*_i\Vert _2\) is maximal. Once \(\hat{a} \approx a\) is learned, one can easily construct a function \(\hat{f}(x) = \hat{g}(\hat{a}^T x)\) by approximating \(\hat{g}(t) \approx f(\hat{a} t)\) on further sampling points. Under the assumptions of smoothness of the activation function, \(g\in C^s([0,1])\) for \(s>1\) with \(g'(0) \ne 0\), and of compressibility of the weight, i.e., \(\Vert a\Vert _{\ell _q^d}\) is small for \(0<q \le 1\), by using L sampling points of the function f and the approach sketched above one can construct a function \(\hat{f}(x) = \hat{g}(\hat{a}^T x)\) such that

$$\begin{aligned} \Vert f-\hat{f}\Vert _{C(\Omega )}\le C\Vert a\Vert _{\ell _q^d}\left\{ L^{-s}+\Vert g\Vert _{C^s([0,1])} \left( \frac{1+\log (d/L)}{L}\right) ^{1/q-1/2}\right\} . \end{aligned}$$

In particular, the result constructs the approximation of the neuron with an error that has a polynomial rate with respect to the number of samples, depending on the smoothness of the activation function and the compressibility of the weight vector a. The dependence on the input dimension is only logarithmic. To take advantage of the compressibility of the weight, compressive sensing [24] is a key tool to solve the linear systems (1). In [13], such an approximation result was obtained by an active and deterministic choice of the input points \(x_i\). In order to relax somewhat the usage of active sampling, in the paper [22] a random sampling of the points \(x_i\) has been proposed, and the resulting error estimate holds with high probability. The assumption \(g'(0) \ne 0\) is somehow crucial, since it was pointed out in [22, 41] that any level of tractability (polynomial complexity) and intractability (super-polynomial complexity) of the problem may be exhibited otherwise.
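
As an illustration of the identification step (1), the following minimal sketch recovers the weight of a single neuron from finite differences; the choice of activation (tanh), Gaussian test directions, and a plain least-squares solver (instead of the compressive sensing approach mentioned above) are ours and serve only as an example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m_Phi, eps = 20, 40, 1e-4

a = rng.standard_normal(d)
a /= np.linalg.norm(a)                       # unit-norm weight of the neuron
f = lambda x: np.tanh(a @ x)                 # f(x) = g(a^T x), here g = tanh

x0 = np.zeros(d)                             # a single sampling point x_i
Phi = rng.standard_normal((m_Phi, d))        # differentiation directions phi_j

# left-hand side of (1): first-order finite differences along phi_j
b = np.array([(f(x0 + eps * phi) - f(x0)) / eps for phi in Phi])

# solve the m_Phi x d system Phi x* = b, with x* = g'(a^T x_0) a
x_star, *_ = np.linalg.lstsq(Phi, b, rcond=None)
a_hat = x_star / np.linalg.norm(x_star)

print("recovery error:", min(np.linalg.norm(a_hat - a), np.linalg.norm(a_hat + a)))
```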

1.2.2 Shallow Networks: The One-Layer Case

Combining several neurons leads to richer function classes [38, 39, 46,47,48]. A neural network with one hidden layer and one output is simply a weighted sum of neurons whose activation functions differ only by shifts, i.e.,

$$\begin{aligned} f(x) = \sum ^{m}_{i=1} b_i \phi (a_i^T x + \theta _i) = \sum _{i=1}^m g_i (a_i^Tx), \end{aligned}$$
(2)

where \(a_i \in \mathbb {R}^d\) and \(b_i, \theta _i \in \mathbb {R}\) for all \(i = 1, \dots , m\). Sometimes, it may be convenient below to use the more compact writing \(f(x) =1^T g(A^T x)\), where \(g=(g_1,\dots , g_m)\) and \(A=[a_1|\dots |a_m] \in \mathbb R^{d \times m}\). Differently from the case of the single neuron, the use of first-order differentiation

$$\begin{aligned} \nabla f(x) = \sum ^m_{i=1} g_i'(a_i^T x)a_i \in A = {\text {span}}\left\{ a_1, \dots , a_m \right\} , \end{aligned}$$
(3)

may furnish information about \(A = {\text {span}}\left\{ a_1, \dots , a_m \right\} \) (active subspace identification [14, 15], see also [22, Lemma 2.1]), but it does not yet allow one to extract information about the individual weights \(a_i\). For that, higher-order information is needed. Recent work shows that the identification of a network (2) can be related to tensor decompositions [1, 23, 32, 43]. As pointed out in Sect. 1.2.1, differentiation exposes the weights. In fact, one way to relate the network to tensors and tensor decompositions is given by higher-order differentiation. In this case, the tensor takes the form

$$\begin{aligned} D^k f(x) = \sum ^m_{i=1} g_i^{(k)}(a_i^T x) \underbrace{a_i \otimes \dots \otimes a_i}_{k-\text {times}}, \end{aligned}$$

which requires that the \(g_i\)’s are sufficiently smooth. In a setting where the samples are actively chosen, it is generally possible to approximate these derivatives by finite differences. However, even for passive sampling there are ways to construct similar tensors [23, 32], which rely on Stein’s lemma [58] or differentiation by parts or weak differentiation. Let us explain how passive sampling in this setting may be used for obtaining tensor representations of the network. If the probability measure of the sampling points \(x_i\)’s is \(\mu _X\) with known (or approximately known [18]) density p(x) with respect to the Lebesgue measure, i.e., \(d \mu _X(x) = p(x) dx\), then we can approximate the expected value of higher-order derivatives by using exclusively point evaluations of f. This follows from

$$\begin{aligned} \frac{1}{N} \sum ^N_{i = 1} f(x_i)(-1)^k \frac{\nabla ^k p(x_i)}{p(x_i)}&\approx \int _{\mathbb {R}^d} f(x)(-1)^k \frac{\nabla ^k p(x)}{p(x)} p(x) dx \\&= \int _{\mathbb {R}^d} \nabla ^k f(x) d\mu _X(x) = \mathbb {E}_{x\sim \mu _X}[\nabla ^k f(x)] \\&= \sum ^m_{i=1} \left( \int _{\mathbb {R}^d} g_i^{(k)}(a_i^T x) d\mu _X(x) \right) \underbrace{a_i \otimes \dots \otimes a_i}_{k-\text {times}}. \end{aligned}$$
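
The following sketch illustrates the estimator above for \(k=2\) under the (illustrative) assumption of standard Gaussian inputs, for which \((-1)^2\nabla ^2 p(x)/p(x) = x x^T - I\); the one-layer network with shifted tanh activations is likewise only an example.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, N = 10, 4, 200_000

A = rng.standard_normal((d, m))
A /= np.linalg.norm(A, axis=0)                       # unit-norm weights a_i
theta = rng.uniform(0.5, 1.0, m)                     # shifts, so that E[g_i''] != 0
f = lambda X: np.tanh(X @ A + theta).sum(axis=1)     # f(x) = sum_i g_i(a_i^T x)

X = rng.standard_normal((N, d))                      # x_i ~ mu_X = N(0, I_d)
fX = f(X)

# (1/N) sum_i f(x_i) (x_i x_i^T - I)  ~  E[nabla^2 f(X)], which lies in span{a_i a_i^T}
T = (X * fX[:, None]).T @ X / N - fX.mean() * np.eye(d)

print(np.round(np.linalg.svd(T, compute_uv=False)[:m + 2], 3))  # decay after m values
```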

In the work [32], decompositions of third-order symmetric tensors (\(k=3\)) [1, 35, 51] have been used for the weight identification of one hidden layer neural networks. Instead, beyond the classical results about principal Hessian directions [37], in [23] it is shown that using second derivatives \((k=2)\) actually suffices, and the corresponding error estimates reflect positively the lower order of differentiation and the potential for improved stability, see e.g., [16, 28, 29]. The main part of the present work is an extension of the latter approach, and therefore we give a short summary of it with emphasis on active sampling, which will be assumed as the sampling method in this paper. The first step of the approach in [23] is taking advantage of (3) to reduce the dimensionality of the problem from d to m.

Reduction to the active subspace. Before stating the core procedure, we want to introduce a simple and optional method, which can help to reduce the problem complexity in practice. Assume \(f: \Omega \subset \mathbb {R}^d \rightarrow \mathbb {R}\) takes the form (2), where \(d \ge m\) and \(a_1, \dots , a_m\in \mathbb {R}^d\) are linearly independent. From a numerical perspective, the input dimension d of the network plays a relevant role in the complexity of the procedure. For this reason, in [23] the input dimension is effectively reduced to the number of neurons in the first hidden layer. With this reasoning, in the sections that follow we also consider networks where the input dimension matches the number of neurons of the first hidden layer.

Assume for the moment that the active subspace \(A={\text {span}}\left\{ a_1, \dots , a_m \right\} \) is known. Let us choose any orthonormal basis of A and arrange it as the columns of a matrix \(\hat{A} \in \mathbb {R}^{d \times m}\). Then

$$\begin{aligned} f(x) =f(P_A x) = f(\hat{A} \hat{A}^T x), \end{aligned}$$

which can be used to define a new network

$$\begin{aligned} \hat{f}(y) := f(\hat{A} y ) : \mathbb {R}^m \rightarrow \mathbb {R}. \end{aligned}$$

whose weights are \(\alpha _1 = \hat{A}^T a_1, \dots , \alpha _m = \hat{A}^T a_m\); all the other parameters remain unchanged. Note that \(\hat{A} \alpha _i = P_A a_i = a_i\), and therefore \(a_i\) can be recovered from \(\alpha _i\). In summary, if the active subspace of f is approximately known, then we can construct \(\hat{f}\) such that the identification of f and of \(\hat{f}\) are equivalent. This allows us to reduce the problem to the identification of \(\hat{f}\) instead of f, under the condition that we approximate \(P_A\) well enough [23, Theorem 1.1]. As recalled in (3), we can easily produce approximations to vectors in A by approximate first-order differentiation of the original network f and, in an ideal setting, generating m linearly independent gradients would suffice to approximate A. However, in general there is no way to ensure such linear independence a priori, and we have to account for the error caused by approximating gradients by finite differences. By suitable assumptions on f (see the full rank condition on the matrix J[f] defined in (4) below) and using Algorithm 1, we obtain the following approximation result.

Algorithm 1 (pseudocode figure, not reproduced here): construction of the projection \(P_{\hat{A}}\) from \(m_{X}(d+1)\) point evaluations of f via approximate gradients.
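
For orientation, here is a minimal numerical sketch in the spirit of Algorithm 1: gradients of f are approximated by first-order finite differences at \(m_X\) random points (\(d+1\) evaluations each) and the active subspace is extracted by a singular value decomposition. The network, activation, and all parameter choices below are illustrative and not prescribed by the algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, m_X, eps = 30, 5, 200, 1e-5

A = np.linalg.qr(rng.standard_normal((d, m)))[0]        # unit-norm weights a_i
f = lambda x: np.tanh(x @ A + 0.5).sum(axis=-1)         # one-layer network

def grad_fd(x):
    """Forward-difference gradient of f at x (d + 1 evaluations)."""
    fx = f(x)
    return np.array([(f(x + eps * e) - fx) / eps for e in np.eye(d)])

X = rng.standard_normal((m_X, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)            # x_i ~ Unif(S^{d-1})

G = np.stack([grad_fd(x) for x in X], axis=1)            # d x m_X matrix of gradients
U = np.linalg.svd(G, full_matrices=False)[0][:, :m]      # top-m left singular vectors
P_A_hat = U @ U.T                                        # estimate of P_A

P_A = A @ A.T                                            # A has orthonormal columns here
print("||P_A - P_A_hat||_F =", np.linalg.norm(P_A - P_A_hat))
```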

Theorem 1

([23], Theorem 2.2) Assume the vectors \((a_i)_{i=1}^m\) are linearly independent and of unit norm. Additionally, assume that the \(g_i\)’s are smooth enough. Let \(P_{\hat{A}}\) be constructed as described in Algorithm 1 by sampling \(m_{X} (d+1)\) values of f. Let \(0<s<1\), and assume that the matrix

$$\begin{aligned} J[f]:= & {} \mathbb {E}_{X \sim \mu _{\mathbb S^{d-1}}} \nabla f(X) \otimes \nabla f(X) \nonumber \\= & {} \int _{{\mathbb S}^{d-1}}\nabla f(x) \nabla f(x)^T d \mu _{{\mathbb S}^{d-1}}(x) \end{aligned}$$
(4)

has full rank, i.e., its m-th singular value fulfills \(\sigma _m\left( J[f]\right) \ge \alpha >0\). Then

$$\begin{aligned} \Vert P_A-P_{\hat{A}}\Vert _F \le \frac{2C_1\epsilon m}{\sqrt{\alpha (1-s)}-C_1\epsilon m}, \end{aligned}$$

with probability at least \(1 - m \exp \Bigl (-\frac{m_{X}\alpha s^2 }{2 m^2 C_2^2}\Bigr )\), where \(C_1, C_2 > 0\) are constants depending only on the smoothness of the \(g_i\)’s.

Identifying the weights. As clarified in the previous section, we can assume from now on that \(d=m\) without loss of generality. Let f be a network of the type (2), with twice differentiable activation functions \((g_i)_{i=1,\dots , m}\), and linearly independent weights \((a_i)_{i=1,\dots, m} \in \mathbb {R}^m\) of unit norm. Then f has second derivative

$$\begin{aligned} \nabla ^2 f(x) = \sum ^m_{i=1} g_i''(a_i^T x) a_i \otimes a_i \in \mathcal{A}={\text {span}}\left\{ a_1 \otimes a_1, \dots , a_m \otimes a_m\right\} , \end{aligned}$$
(5)

whose expression represents a nonorthogonal rank-1 decomposition of the Hessian. The idea is, first of all, to modify the network by an ad hoc linear transformation (whitening) of the input

$$\begin{aligned} f(W^T x) = \sum _{i=1}^{m} g_i(a_i^T W^T x) \end{aligned}$$

in such a way that \((Wa_i/\Vert Wa_i\Vert _2)_{i=1,\dots , m}\) forms an orthonormal system. The computation of W can be performed by spectral decomposition of any positive definite matrix

$$\begin{aligned} G \in \hat{\mathcal{A}}\approx \mathcal{A}, \gamma I \preccurlyeq G. \end{aligned}$$

In fact, from the spectral decomposition of \(G = UDU^T\), we define \(W = D^{-\frac{1}{2}}U^T\) (see [23, Theorem 3.7]). This procedure is called whitening: it allows one to reduce the problem to networks with nearly orthogonal weights, and it presupposes that we have obtained \( \hat{\mathcal{A}}\approx \mathcal{A}={\text {span}}\left\{ a_1 \otimes a_1, \dots , a_m \otimes a_m\right\} \). By using (5) and a similar approach as in Algorithm 1 (one simply substitutes there the approximate gradients with vectorized approximate Hessians), one can compute \(\hat{\mathcal{A}}\) under the assumption that also the second-order matrix

$$\begin{aligned} H[f]:= & {} \mathbb {E}_{X \sim \mu _{\mathbb S^{m-1}}} {\text {vec}}(\nabla ^2 f (X)) \otimes {\text {vec}}(\nabla ^2 f(X))\\= & {} \int _{\mathbb S^{m-1}} {\text {vec}}(\nabla ^2 f (x)) \otimes {\text {vec}}(\nabla ^2 f(x)) \hbox {d} \mu _{\mathbb S^{m-1}}(x) \end{aligned}$$

is of full rank, where \({\text {vec}}(\nabla ^2 f (x))\) is the vectorization of the Hessian \(\nabla ^2 f (x)\).
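
A small sketch of the whitening step just described: given a positive definite \(G = \sum _i c_i\, a_i \otimes a_i\) (built here from the true weights for illustration; in the algorithm G would be taken from \(\hat{\mathcal{A}}\)), the transform \(W = D^{-1/2}U^T\) renders the vectors \(Wa_i/\Vert Wa_i\Vert _2\) orthonormal.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6
A = rng.standard_normal((m, m))
A /= np.linalg.norm(A, axis=0)                 # unit-norm, linearly independent a_i

c = rng.uniform(0.5, 2.0, m)                   # positive coefficients
G = (A * c) @ A.T                              # G = sum_i c_i a_i a_i^T, pos. definite

D, U = np.linalg.eigh(G)
W = np.diag(D ** -0.5) @ U.T                   # whitening transform W = D^{-1/2} U^T

WA = W @ A
WA /= np.linalg.norm(WA, axis=0)               # columns W a_i / ||W a_i||_2
print(np.round(WA.T @ WA, 6))                  # approximately the identity matrix
```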

After whitening, one could assume without loss of generality that the vectors \((a_i)_{i=1,\dots, m} \in \mathbb {R}^m\) are nearly orthonormal in the first place. Hence, the representation (5) would be a near spectral decomposition of the Hessian and the components \(a_i \otimes a_i\) would represent the approximate eigenvectors. However, the numerical stability of spectral decompositions is ensured only under spectral gaps [6, 50]. In order to maximally stabilize the approximation of the \(a_i\)’s, one seeks matrices \(M \in \hat{\mathcal{A}}\) with a maximal spectral gap between the first and second largest eigenvalues. This is achieved by the maximizers of the following nonconvex program

$$\begin{aligned} M = \arg \max \left\| {M}\right\| \quad \text {s.t.}\quad M \in \hat{\mathcal{A}},\quad \left\| {M}\right\| _F \le 1, \end{aligned}$$
(6)

where \(\Vert \cdot \Vert \) and \(\Vert \cdot \Vert _F\) are the spectral and Frobenius norms, respectively. This program can be solved by a suitable projected gradient ascent, see for instance [23, Algorithm 3.4] and Algorithm 3, and, for any resulting maximizer, the eigenvector associated to the largest eigenvalue in absolute value is close to one of the \(a_i\)’s. Once approximations \(\hat{a}_i\) to all the \(a_i\)’s are retrieved, it is not difficult to perform the identification of the activation functions \(g_i\), see [23, Algorithm 4.1, Theorem 4.1]. The recovery of the network resulting from this algorithmic pipeline is summarized by the following statement.
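
The following schematic projected gradient ascent for program (6) is a simplified sketch and not the paper's Algorithm 3: the space \(\hat{\mathcal{A}}\) is taken here as the exact span of \(a_i \otimes a_i\) for an orthonormal system, the ascent direction is the outer product of the top eigenvector, and step size and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 6
A = np.linalg.qr(rng.standard_normal((m, m)))[0]       # orthonormal a_i (post-whitening)

# orthogonal projector onto span{vec(a_i a_i^T)}, our stand-in for \hat{A}
basis = np.stack([np.outer(a, a).ravel() for a in A.T], axis=1)   # m^2 x m
P_span = basis @ np.linalg.pinv(basis)

def project(M):
    """Project onto the span, then onto the Frobenius unit ball."""
    M = (P_span @ M.ravel()).reshape(m, m)
    return M / max(np.linalg.norm(M), 1.0)

M = project(rng.standard_normal((m, m)))
for _ in range(200):
    lam, V = np.linalg.eigh(M)
    k = np.argmax(np.abs(lam))
    # ascent step along a (sub)gradient of the spectral norm
    M = project(M + 0.1 * np.sign(lam[k]) * np.outer(V[:, k], V[:, k]))

lam, V = np.linalg.eigh(M)
u = V[:, np.argmax(np.abs(lam))]          # eigenvector of the largest |eigenvalue|
print(np.round(np.abs(A.T @ u), 3))       # one entry ~1: u aligns with one of the a_i
```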

Theorem 2

([23], Theorem 1.2) Let f be a real-valued function defined on a neighborhood of \(\Omega =B_1^d\), which takes the form

$$\begin{aligned} f(x) = \sum _{i=1}^m g_i(a_i \cdot x), \end{aligned}$$

for \(m\le d\). Let \(g_i\) be three times continuously differentiable on a neighborhood of \([-1,1]\) for all \(i=1,\dots ,m\), and let \(\{a_1,\dots ,a_m\}\) be linearly independent. We additionally assume both J[f] and H[f] of maximal rank m. Then, for all \(\epsilon >0\) (stepsize employed in the computation of finite differences), using at most \(m_{\mathcal X} [(d+1)+ (m+1)(m+2)/2]\) random exact point evaluations of f, the nonconvex program (6) constructs approximations \(\{\hat{a}_1,\dots ,\hat{a}_m\}\) of the weights \(\{a_1,\dots ,a_m\}\) up to a sign change for which

$$\begin{aligned} \bigg ( \sum _{i=1}^m \Vert \hat{a}_i-a_i\Vert _2^2 \bigg )^{1/2} \lesssim \varepsilon , \end{aligned}$$
(7)

with probability at least \(1 - m \exp \Bigl (-\frac{m_{\mathcal X}c }{2 \max \{C_1,C_2\}^2 m^2}\Bigr )\), for a suitable constant \(c>0\) intervening (together with some fixed power of m) in the asymptotic constant of the approximation (7). Moreover, once the weights are retrieved, one constructs an approximating function \(\hat{f}:B_1^d \rightarrow \mathbb R\) of the form

$$\begin{aligned} \hat{f}(x) = \sum _{i=1}^m \hat{g}_i(\hat{a}_i \cdot x), \end{aligned}$$

such that

$$\begin{aligned} \Vert f- \hat{f}\Vert _{C(\Omega )} \lesssim \epsilon . \end{aligned}$$
(8)

While this result has been generalized to the case of passive sampling in [23], and through whitening it allows for the identification of nonorthogonal weights, it is restricted to the case of \(m \le d\) and linearly independent weights \(\{a_i: i=1,\dots ,m\}\).

The main goal of this paper is generalizing this approach to account both for the identification of fully nonlinear networks with two hidden layers and for the case where \(m > d\) and the weights are not necessarily nearly orthogonal or even linearly independent (see Remark 10 below).

1.2.3 Deeper Networks: The Two-Layer Case

What follows further extends the theory discussed in the previous sections to a wider class of functions, namely neural networks with two hidden layers. By doing so, we will also address a relevant open problem that was stated in [23], which deals with the identification of shallow neural networks where the number of neurons is larger than the input dimension. First, we need a precise definition of the architecture of the neural networks we intend to consider.

Definition 3

Let \(0 < m_1\le m_0\le d\), \(\{a_1,\ldots ,a_{m_0}\}\subset \mathbb {S}^{d-1}\), \(\{b_1,\ldots ,b_{m_1}\}\subset \mathbb {S}^{m_0-1}\), and let \(g_1,\ldots ,g_{m_0}\) and \(h_1,\ldots ,h_{m_1}\) be univariate functions. Denote \(A := (a_1|\ldots |a_{m_0}) \in \mathbb {R}^{d\times m_0}\), \(B := (b_1|\ldots |b_{m_1}) \in \mathbb {R}^{m_0\times m_1}\), \(G_0 := {\text {diag}}\big ( g_1'(0),\ldots ,g_{m_0}'(0)\big )\), and assume the following:

  1. (A1)

    \(g_i'(0) \ne 0\) for all \( i \in [m_0]\),

  2. (A2)

    the system \(\{a_1,\ldots ,a_{m_0}, v_1,\ldots ,v_{m_1}\} \subset \mathbb {S}^{d-1}\) with \(v_\ell := \frac{AG_0 b_\ell }{\left\| {AG_0 b_\ell }\right\| }\) satisfies a frame condition

    $$\begin{aligned}&c_f\left\| {x}\right\| ^2 \le \sum \limits _{i=1}^{m_0} \left\langle x, a_i\right\rangle ^2 + \sum \limits _{\ell =1}^{m_1} \left\langle x, v_\ell \right\rangle ^2 \le C_F\left\| {x}\right\| ^2 \nonumber \\&\quad \text { for } 0< c_f\le C_F \text { and all } x \in {\text {span}}\{a_1,\dots ,a_{m_0}\}, \end{aligned}$$
    (9)
  3. (A3)

    the derivatives of \(g_i\) and \(h_\ell \) are uniformly bounded according to

    $$\begin{aligned} \quad \max \limits _{i=1,\ldots ,m_0}\sup \limits _{t \in \mathbb {R}}\left| {g_i^{(k)}(t)}\right| \le \kappa _{k}, \quad \text { and }\quad \max \limits _{\ell =1,\ldots ,m_1}\sup \limits _{t \in \mathbb {R}}\left| {h_\ell ^{(k)}(t)}\right| \le \eta _{k},\quad k=0,1,2,3. \end{aligned}$$

Then we define a set of two-layer networks by

$$\begin{aligned}&\mathcal{F}(d,m_0, m_1)\\&\quad := \left\{ f : \mathbb {R}^d \rightarrow \mathbb {R} : f(x)\! = \! \sum _{\ell =1}^{m_1} h_\ell \left( \sum _{i=1}^{m_0} b_{i\ell }g_i \left( a_i^T x\right) \right) ,(A1)\!-\!(A3) \text { are satisfied} \right\} , \end{aligned}$$

where \(b_{i\ell }\) is the \((i,\ell )\)-th entry of B, i.e., the i-th entry of the vector \(b_\ell \).
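
To fix ideas, the following sketch instantiates a network of the class \(\mathcal{F}(d,m_0, m_1)\); the tanh activations (with a shift in the first layer, so that (A1) holds) are an illustrative choice and not part of the definition, and the frame condition (A2) is satisfied for some constants since the random \(a_i\) are almost surely linearly independent.

```python
import numpy as np

rng = np.random.default_rng(5)
d, m0, m1 = 10, 6, 3

A = rng.standard_normal((d, m0)); A /= np.linalg.norm(A, axis=0)   # a_i in S^{d-1}
B = rng.standard_normal((m0, m1)); B /= np.linalg.norm(B, axis=0)  # b_l in S^{m0-1}

g = lambda t: np.tanh(t + 0.4)     # g_i: bounded derivatives (A3), g_i'(0) != 0 (A1)
h = lambda t: np.tanh(t)           # h_l: bounded derivatives (A3)

def f(x):
    """f(x) = 1^T h(B^T g(A^T x)) = sum_l h_l( sum_i b_{il} g_i(a_i^T x) )."""
    return h(B.T @ g(A.T @ x)).sum()

print(f(rng.standard_normal(d)))
```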

Sometimes it may be convenient to use the more compact writing \(f(x) =1^T h(B^Tg(A^T x))\), where \(g=(g_1,\dots , g_{m_0})\) and \(h=(h_1,\dots , h_{m_1})\). In the previous section, we presented a dimension reduction that can be applied to one-layer neural networks and which is useful to reduce the dimensionality from the input dimension to the number of neurons of the first layer. The same approach can be applied to networks of the class \(\mathcal{F}(d,m_0, m_1)\). For the approximation error of the active subspace, we end up with the following corollary of Theorem 1.

Corollary 4

(cf. Theorem 1) Assume that \(f \in \mathcal{F}(d,m_0, m_1)\) and let \(P_{\hat{A}}\) be constructed as described in Algorithm 1 by sampling \(m_X(d+1)\) values of f. Let \(0<s<1\) and assume that the \(m_0\)-th singular value of J[f] fulfills \( \sigma _{m_0}\left( J[f]\right) \ge \alpha >0\). Then we have

$$\begin{aligned} \Vert P_A - P_{\hat{A}}\Vert _F \le \frac{2 C_3 \epsilon m_0m_1}{\sqrt{(1-s)\alpha } - C_3 \epsilon m_0m_1}, \end{aligned}$$

with probability at least \(1-m_0\exp (-\frac{s^2 m_X \alpha }{2 C_4 m_1})\) and constants \(C_3, C_4 > 0\) that depend only on \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).

In view of Corollary 4, we can again apply [23, Theorem 1.1] and assume, without loss of generality, that \(d=m_0\), in which case the frame condition (9) automatically implies the invertibility of A, as the vectors \(v_\ell \) are linear combinations of the \(a_i\)’s.

2 Approximating the Span of Tensors of Weights

In the one-layer case described earlier, the unique identification of the weights is made possible by constructing a matrix space whose rank-1 basis elements are outer products of the weight profiles of the network. This section illustrates the extension of this approach beyond shallow neural networks. Once again, we will make use of differentiation, and overall there will be many parallels to the approach in [23]. However, the intuition behind the matrix space will be less straightforward, because we can no longer directly express the second derivative of a two-layer network as a linear combination of symmetric rank-1 matrices. This is due to the fact that the Hessian matrix of a network \(f \in \mathcal{F}(m_0, m_0, m_1)\) has the form

$$\begin{aligned} \nabla ^2 f(x)&= \sum ^{m_1}_{\ell =1}h_\ell '(b_\ell ^T g(A^T x))\sum _{i=1}^{m_0} b_{i\ell }g_i''(a_i^T x) a_i\otimes a_i\\&\quad + \frac{1}{2}\sum ^{m_1}_{\ell = 1}\sum ^{m_0}_{i,j=1} h_\ell ''\left( b_\ell ^Tg(A^T x)\right) b_{i\ell }b_{j\ell }g_i'(a_i^T x)g_j'(a_j^T x) (a_i\otimes a_j + a_j\otimes a_i). \end{aligned}$$

Therefore, \(\nabla ^2 f(x) \in {\text {span}}{\lbrace a_i\otimes a_j + a_j\otimes a_i \;\vert \;i,j = 1,\dots ,m_0\rbrace }\), which has dimension \(\frac{m_0(m_0+1)}{2}\) and is in general not spanned by symmetric rank-1 matrices. This expression is indeed quite complicated, due to the chain rule and the mixed tensor contributions that consequently appear. At first glance, it would seem impossible to use an approach similar to the one for shallow neural networks recalled in the previous section. Nevertheless, a relatively simple algebraic manipulation allows us to recognize some useful structure: For a fixed \(x \in \mathbb {R}^{m_0}\), we rearrange the expression as

$$\begin{aligned} \nabla ^2 f(x)&= \sum ^{m_1}_{\ell =1}h_\ell '(b_\ell ^T g(A^T x))\sum _{i=1}^{m_0} b_{i\ell }g_i''(a_i^T x) a_i\otimes a_i\\&\quad + \sum ^{m_1}_{\ell = 1} h_\ell ''\left( b_\ell ^Tg(A^T x)\right) \left[ \sum ^{m_0}_{i=1}b_{i\ell }g_i'(a_i^T x)a_i\right] \otimes \left[ \sum ^{m_0}_{j=1} b_{j\ell }g_j'(a_j^T x)a_j\right] , \end{aligned}$$

which is a combination of symmetric rank-1 matrices since \(\sum ^{m_0}_{j=1} b_{j\ell }g_j'(a_j^T x)a_j \in \mathbb {R}^{m_0}\). We write the latter expression more compactly by introducing the notation

$$\begin{aligned} \nabla ^2 f(x) = \sum _{i=1}^{m_0} \gamma _{i}(x) a_i\otimes a_i + \sum ^{m_1}_{\ell = 1} \tau _\ell (x) v_{\ell }(x) \otimes v_{\ell }(x), \end{aligned}$$
(10)

where \(G_x = {\text {diag}}\left( g_1'(a_1^T x), \dots , g_{m_0}'(a_{m_0}^T x)\right) \in \mathbb {R}^{m_0\times m_0}\) and

$$\begin{aligned} v_{\ell }(x)&= A G_x b_\ell \in \mathbb {R}^{m_0}&\text {for }\ell \in [m_1], \end{aligned}$$
(11)
$$\begin{aligned} \gamma _i(x)&= g_i''(a_i^T x)\sum ^{m_1}_{\ell =1}h_\ell '(b_\ell ^T g(A^T x)) b_{i\ell } \in \mathbb {R}&\text { for } i \in [m_0], \end{aligned}$$
(12)
$$\begin{aligned} \tau _\ell (x)&= h_\ell ''\left( b_\ell ^Tg(A^T x)\right) \in \mathbb {R}&\text { for } \ell \in [m_1]. \end{aligned}$$
(13)
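
A quick numerical sanity check of the decomposition (10)–(13): for an illustrative two-layer tanh network, the analytic expression \(\sum _i \gamma _i(x)\, a_i\otimes a_i + \sum _\ell \tau _\ell (x)\, v_\ell (x)\otimes v_\ell (x)\) is compared against a finite-difference Hessian.

```python
import numpy as np

rng = np.random.default_rng(7)
m0, m1, eps = 5, 2, 1e-4

A = rng.standard_normal((m0, m0)); A /= np.linalg.norm(A, axis=0)
B = rng.standard_normal((m0, m1)); B /= np.linalg.norm(B, axis=0)
g   = lambda t: np.tanh(t + 0.3)
dg  = lambda t: 1 - np.tanh(t + 0.3) ** 2
ddg = lambda t: -2 * np.tanh(t + 0.3) * (1 - np.tanh(t + 0.3) ** 2)
h   = lambda t: np.tanh(t)
dh  = lambda t: 1 - np.tanh(t) ** 2
ddh = lambda t: -2 * np.tanh(t) * (1 - np.tanh(t) ** 2)
f = lambda x: h(B.T @ g(A.T @ x)).sum()

x = 0.3 * rng.standard_normal(m0)
u = B.T @ g(A.T @ x)                              # b_l^T g(A^T x)
V_x = A @ np.diag(dg(A.T @ x)) @ B                # columns v_l(x), see (11)
gamma = ddg(A.T @ x) * (B @ dh(u))                # gamma_i(x), see (12)
tau = ddh(u)                                      # tau_l(x), see (13)
H_analytic = (A * gamma) @ A.T + (V_x * tau) @ V_x.T

# central finite differences for comparison
H_fd = np.zeros((m0, m0))
for i in range(m0):
    for j in range(m0):
        ei, ej = eps * np.eye(m0)[i], eps * np.eye(m0)[j]
        H_fd[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                      - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)

print(np.abs(H_analytic - H_fd).max())            # ~ finite-difference error
```
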
Fig. 1 Illustration of the relationship between \(\mathcal{W}\) (black line) and the region covered by the Hessians (light blue region), given by two nonlinear cones that fan out from \(\nabla ^2 f(0)\). There is no reason to believe that these cones are symmetric around \(\mathcal{W}\). The gray cones show the maximal deviation of \(\hat{\mathcal{W}}\) from \(\mathcal{W}\)

Let us now introduce the fundamental matrix space

$$\begin{aligned} \mathcal{W}= \mathcal{W}(f) := {\text {span}}\left\{ a_1 \otimes a_1, \dots , a_{m_0} \otimes a_{m_0}, v_1 \otimes v_1, \dots , v_{m_1} \otimes v_{m_1} \right\} , \end{aligned}$$
(14)

where the \({\text {span}}\) is taken over \(\mathbb {R}\), \(a_1, \dots , a_{m_0}\) are the weight profiles of the first layer, and

$$\begin{aligned} v_\ell := \frac{v_\ell (0)}{\left\| {v_\ell (0)}\right\| _2} = \frac{A G_0 b_\ell }{\Vert A G_0 b_\ell \Vert _2}, \quad \text { for }\quad \ell \in [m_1], \end{aligned}$$

encode “entangled” information between A and B. For this reason, we call the \(v_\ell \)’s entangled weights. Let us stress at this point that the definition and the constructive approximation of the space \(\mathcal{W}\) is perhaps the most crucial and relevant contribution of this paper. In fact, by inspecting carefully the expression (10), we immediately notice that \(\nabla ^2 f(0) \in \mathcal{W}\), and also that the first sum in (10), namely \(\sum _{i=1}^{m_0} \gamma _i(x) a_i\otimes a_i\), lies in \(\mathcal{W}\) for all \(x \in \mathbb {R}^{m_0}\). Moreover, for arbitrary sampling points x, deviations of \(\nabla ^2 f(x)\) from \(\mathcal{W}\) are only due to the second term in (10). The intuition is that for suitably centered distributions of sampling points \(x_1,\ldots ,x_{m_X}\), with \(a_j^T x_i \approx 0\) so that \(G_{x_i} \approx G_0\), the Hessians \(\{\nabla ^2 f(x_i): i \in [m_X]\}\) are distributed around the space \(\mathcal{W}\), see Fig. 1 for a two-dimensional sketch of the geometrical situation. Hence, we would attempt an approximation of \(\mathcal{W}\) by PCA of a collection of such approximate Hessians. Practically, by active sampling (targeted evaluations of the network f) we first construct estimates \(\{\Delta _\epsilon ^2 f(x_i) : i \in [m_X]\}\) by finite differences of the Hessian matrices \(\{\nabla ^2 f(x_i) : i \in [m_X]\}\) (see Sect. 2.1), at sampling points \(x_1, \dots , x_{m_X} \in \mathbb {R}^{m_0}\) drawn independently from a suitable distribution \(\mu _X\). Next, we define the matrix

$$\begin{aligned} \hat{W} = \left( {\text {vec}}(\Delta _\epsilon ^2 f(x_1))| \ldots |{\text {vec}}(\Delta _\epsilon ^2 f(x_{m_X}))\right) \in \mathbb {R}^{m_0^2 \times m_X}, \end{aligned}$$

whose columns are the vectorizations of the approximate Hessians. Finally, we produce the approximation \(\hat{\mathcal{W}}\) to \(\mathcal{W}\) as the span of the first \(m_0+m_1\) left singular vectors of the matrix \(\hat{W}\), where the choice \(m_0+m_1\) enforces \(\dim (\hat{\mathcal{W}}) = \dim (\mathcal{W}) = m_0+m_1\). The whole procedure of calculating \(\hat{\mathcal{W}}\) is given in Algorithm 2. It should be clear that the choice of \(\mu _X\) plays a crucial role for the quality of this method. In the analysis that follows, we focus on distributions that are centered and concentrated. Figure 1 helps to form a better geometrical intuition of the result of the procedure. It shows the region covered by the Hessians, indicated by the light blue area, which envelopes the space \(\mathcal{W}\) in a sort of nonlinear/nonconvex cone originating from \(\nabla ^2 f(0)\). In general, the Hessians do not concentrate around \(\mathcal{W}\) in a symmetric way, which means that the “center of mass” of the Hessians can never be perfectly aligned with the space \(\mathcal{W}\), regardless of the number of samples. In this analogy, the center of mass corresponds to the space estimated by Algorithm 2, which essentially is a noncentered principal component analysis of the observed Hessian matrices. The primary result of this section is Theorem 5, which provides an estimate of the approximation error of Algorithm 2 depending on the sub-Gaussian norm of the sample distribution \(\mu _X\) and the number of neurons in the respective layers. More precisely, this result gives a worst-case estimate of the error caused by the imbalance of mass. For the reasons mentioned above, the error does not necessarily vanish with an increasing number of samples, but the probability under which the statement holds tends to 1. In Fig. 1, the estimated region is illustrated by the gray cones that show the maximal, worst-case deviation of \(\hat{\mathcal{W}}\). One crucial condition for Theorem 5 to hold is that there exists an \(\alpha >0\) such that

$$\begin{aligned} \sigma _{m_0+ m_1}\left( \mathbb {E}_{X \sim \mu _X}{\text {vec}}(\nabla ^2 f(X))\otimes {\text {vec}}(\nabla ^2 f(X))\right) \ge \alpha . \end{aligned}$$

This assumption ensures that the space spanned by the observed Hessians has, in expectation, at least dimension \(m_0+ m_1\). Aside from this technical aspect, this condition implicitly helps to avoid network configurations that are reducible, in the sense that certain weights cannot be recovered. For example, we can define a network in \(\mathcal{F}(2,2,1)\) with weights given by

$$\begin{aligned} a_1 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix},\, a_2 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix},\ b_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \end{aligned}$$

It is easy to see that \(a_2\) will never be used during a forward pass through the network, which makes it impossible to recover \(a_2\) from the output of the network.

In the theorem below and in the proofs that follow, we will make use of the sub-Gaussian norm \(\left\| {\cdot }\right\| _{\psi _2}\) of a random variable. This quantity measures how fast the tails of a distribution decay, and such a decay plays an important role in several concentration inequalities. More generally, for \(p\ge 1\), the \(\psi _p\)-norm of a scalar random variable Z is defined as

$$\begin{aligned} \left\| {Z}\right\| _{\psi _p} = \inf \left\{ t>0: \mathbb {E}\exp (|Z / t|^p )\le 2\right\} . \end{aligned}$$

For a random vector X on \(\mathbb {R}^d\) the \(\psi _p\)-norm is given by

$$\begin{aligned} \left\| {X}\right\| _{\psi _p} = \sup _{x \in \mathbb {S}^{d-1}} \left\| {\left| {\langle X, x \rangle }\right| }\right\| _{\psi _p}. \end{aligned}$$

The random variables for which \(\left\| {X}\right\| _{\psi _1} <\infty \) are called subexponential and those for which \(\left\| {X}\right\| _{\psi _2} <\infty \) are called sub-Gaussian. More generally, the Orlicz space \(L_{\psi _p}= L_{\psi _p}(\Omega , \Sigma , \mathbb P)\) consists of all real random variables X on the probability space \((\Omega , \Sigma , \mathbb P)\) with finite \(\left\| {X}\right\| _{\psi _p}\) norm, and its elements are called p-subexponential random variables. Below, we mainly focus on sub-Gaussian random variables. In particular, every bounded random variable is sub-Gaussian, which covers all the cases we discuss in this work. We refer to [65] for more details. One example of a sub-Gaussian distribution is the uniform distribution on the unit sphere, \(X \sim {{\,\mathrm{Unif}\,}}(\mathbb {S}^{d-1})\), which has sub-Gaussian norm \(\left\| {X}\right\| _{\psi _2} = \frac{1}{\sqrt{d}}\).

Algorithm 2 (pseudocode figure, not reproduced here): computation of the projection \(P_{\hat{\mathcal{W}}}\) from the vectorized finite-difference Hessians \(\Delta _\epsilon ^2 f(x_i)\), \(i \in [m_X]\), via a truncated singular value decomposition.
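
A minimal sketch in the spirit of Algorithm 2, assuming an illustrative two-layer tanh network and the forward-difference Hessian estimator (17) defined below in Sect. 2.1: the vectorized Hessian estimates are collected in \(\hat{W}\) and \(P_{\hat{\mathcal{W}}}\) is obtained from its first \(m_0+m_1\) left singular vectors. The accuracy of the resulting space depends on \(\mu _X\) and on the network, cf. the bound (16); here we only check that \(\nabla ^2 f(0)\), which lies in \(\mathcal{W}\), is well captured.

```python
import numpy as np

rng = np.random.default_rng(6)
m0, m1, m_X, eps = 10, 2, 500, 1e-3

A = np.linalg.qr(rng.standard_normal((m0, m0)))[0]          # unit-norm a_i (d = m0)
B = rng.standard_normal((m0, m1)); B /= np.linalg.norm(B, axis=0)
g = lambda t: np.tanh(t + 0.4)                              # illustrative activations
h = lambda t: np.tanh(t)
f = lambda x: h(B.T @ g(A.T @ x)).sum()

def hess_fd(x):
    """Forward-difference Hessian estimate, cf. (17) in Sect. 2.1."""
    E = eps * np.eye(m0)
    fx = f(x)
    fe = np.array([f(x + e) for e in E])
    return np.array([[(f(x + E[i] + E[j]) - fe[i] - fe[j] + fx) / eps ** 2
                      for j in range(m0)] for i in range(m0)])

X = rng.standard_normal((m_X, m0))
X /= np.linalg.norm(X, axis=1, keepdims=True)               # mu_X = Unif(S^{m0-1})

# matrix of vectorized Hessian estimates and its truncated SVD
W_hat_mat = np.stack([hess_fd(x).ravel() for x in X], axis=1)   # m0^2 x m_X
U, s, _ = np.linalg.svd(W_hat_mat, full_matrices=False)
P_W_hat = U[:, :m0 + m1] @ U[:, :m0 + m1].T                 # projection onto \hat{W}

print("leading singular values:", np.round(s[:m0 + m1 + 3], 2))
h0 = hess_fd(np.zeros(m0)).ravel()                          # vec of nabla^2 f(0), in W
print("relative residual of vec(H(0)):",
      np.linalg.norm(h0 - P_W_hat @ h0) / np.linalg.norm(h0))
```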

Theorem 5

Let \(f \in \mathcal{F}(m_0, m_0, m_1)\) be a neural network within the class described in Definition 3 and consider the space \(\mathcal{W}\) as defined in (14). Assume that \(\mu _X\) is a probability measure with \(\mathrm{supp\ }(\mu _X) \subset B^{m_0}_1\), \(\mathbb {E}X = 0\), and that there exists an \(\alpha > 0\) such that

$$\begin{aligned} \sigma _{m_0+ m_1}\left( \mathbb {E}_{X \sim \mu _X} {\text {vec}}(\nabla ^2 f(X)) \otimes {\text {vec}}(\nabla ^2 f(X)) \right) \ge \alpha . \end{aligned}$$
(15)

Then, for any \(\epsilon >0\), Algorithm 2 returns a projection \(P_{\hat{\mathcal{W}}}\) that fulfills

$$\begin{aligned} \left\| {P_{\mathcal{W}^*} -P_{\hat{\mathcal{W}}}}\right\| _F \le \frac{\left( C_{\Delta }\epsilon m_1m_0^{\frac{3}{2}} + C\left\| {A}\right\| ^2 \left\| {B}\right\| ^2 \left\| {X}\right\| _{\psi _2}\sqrt{m_1\log (m_0+1)}\right) }{\sqrt{\frac{\alpha }{2}} - C_{\Delta }\epsilon m_1m_0^{\frac{3}{2}}}, \end{aligned}$$
(16)

for a suitable subspace \(\mathcal{W}^* \subset \mathcal{W}\) (we can actually assume that \(\mathcal{W}^* = \mathcal{W}\) according to Remark 6 below) with probability at least

$$\begin{aligned} 1 - 2 e^{-cm_1m_X} - (m_0+ m_1) e^{- \frac{\alpha }{8C_1 \left\| {A}\right\| ^4\left\| {B}\right\| ^4 m_1} m_X} \end{aligned}$$

where \(c>0\) is an absolute constant and \(C, C_1, C_{\Delta } >0 \) are constants depending on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).

Remark 6

If \(\epsilon >0\) is sufficiently small, due to (15) the space \(\hat{\mathcal{W}}\) returned by Algorithm 2 has dimension \(m_0+ m_1\). If the error bound (16) in Theorem 5 is such that \(\left\| {P_{\mathcal{W}^*} -P_{\hat{\mathcal{W}}}}\right\| _F < 1\), then \(\hat{\mathcal{W}}\) and \(\mathcal{W}^*\) must have the same dimension. Moreover, \(\mathcal{W}^* \subset \mathcal{W}\) and \({\text {dim}}(\mathcal{W}) = m_0+ m_1\) would necessarily imply that \(\mathcal{W}= \mathcal{W}^*\). Hence, for \(\left\| {P_{\mathcal{W}^*} -P_{\hat{\mathcal{W}}}}\right\| _F < 1\) and \(\epsilon >0\) sufficiently small, we have \(\mathcal{W}^* = \mathcal{W}\).

As already mentioned above, for \(\mu _X= {{\,\mathrm{Unif}\,}}(\mathbb {S}^{m_0-1})\) we have \(\left\| {X}\right\| _{\psi _2} = \frac{1}{\sqrt{m_0}}\). In this case, the error bound (16) behaves like

$$\begin{aligned} \left\| {P_{\mathcal{W}} -P_{\hat{\mathcal{W}}}}\right\| _F \le \mathcal O \left( \epsilon m_1m_0^{\frac{3}{2}} + \sqrt{\frac{m_1}{m_0}\log (m_0+1)}\right) , \end{aligned}$$

which is small for \(\epsilon >0\) small and \(m_0\gg m_1\). The latter condition seems to favor networks for which the inner layer has a significantly larger number of neurons than the outer layer. This expectation is actually observed numerically, see Sect. 4. We have to add, though, that the parameter \(\alpha >0\) that intervenes in the error bound (16) might also depend on \(m_0, m_1\) (as it is in fact an estimate of an \((m_0+ m_1)^{th}\) singular value as in (15)). Hence, the dependency on the network dimensions is likely more complex and depends on the interplay between the input distribution \(\mu _X\) and the network architecture. In fact, at least judging from our numerical experiments, the error bound (16) is rather pessimistic, and it certainly describes a worst-case analysis. One reason might be that some crucial estimates in its proof could be significantly improved. Another reason could be the rather great generality of the activation functions of the networks which we analyze in this paper, as described in Definition 3. Perhaps the specific instances used in the numerical experiments enjoy better identification properties.

2.1 Estimating Hessians of the Network by Finite Differences

Before addressing the proof of Theorem 5, we give a precise definition of the finite differences we are using to approximate the Hessian matrices. Denote the i-th Euclidean canonical basis vector in \(\mathbb {R}^d\) by \(e_i\) and the second-order finite difference approximation of \(\nabla ^2 f(x)\) by

$$\begin{aligned} \Delta _\epsilon ^2 f(x)_{ij} := \frac{f(x + \epsilon e_i + \epsilon e_j) - f(x + \epsilon e_i) - f(x + \epsilon e_j) + f(x)}{\epsilon ^2} \end{aligned}$$
(17)

for \(i,j = 1,\dots , d=m_0\) and a step-size \(\epsilon >0\).

Lemma 7

Let \(f \in \mathcal{F}(m_0,m_0, m_1)\) be a neural network. Further assume that \(\Delta _\epsilon ^2 f(x)\) is constructed as in (17) for some \(\epsilon >0\). Then we have

$$\begin{aligned} \sup _{x \in B^{m_0}_1} \Vert \nabla ^2 f(x) - \Delta ^2_{\epsilon }{f}(x)\Vert _F \le C_{\Delta } \epsilon m_1m_0^{\frac{3}{2}}, \end{aligned}$$

where \(C_{\Delta }>0\) is a constant depending on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).

For the proof of Lemma 7, we simply use the Lipschitz continuity of the functions g, h and of their derivatives, and make use of \(\left\| {a_i}\right\| _2, \left\| {b_\ell }\right\| _2 \le 1\). The details can be found in the Appendix (Sect. 1).

2.2 Span of Tensors of (Entangled) Network Weights: Proof of Theorem 5

The proof can essentially be divided into two separate bounds. Both will be addressed separately with the two lemmas below. For both lemmas, we will assume that \(X_1, \dots , X_{m_X} \sim \mu _X\) independently and that \(\mathrm{supp\ }(\mu _X) \subseteq B^{m_0}_1\). Additionally, we define the random matrices

$$\begin{aligned} W&:= ({\text {vec}}(\nabla ^2 f(X_1)) | \dots | {\text {vec}}(\nabla ^2 f(X_{m_X}))), \end{aligned}$$
(18)
$$\begin{aligned} \hat{W}&:= ({\text {vec}}(\Delta _\epsilon ^2 f(X_1)) | \dots | {\text {vec}}(\Delta _\epsilon ^2 f(X_{m_X}))),\end{aligned}$$
(19)
$$\begin{aligned} W^*&:= ({\text {vec}}(P_{\mathcal{W}} \nabla ^2 f(X_1)) | \dots | {\text {vec}}(P_{\mathcal{W}} \nabla ^2 f(X_{m_X}))), \end{aligned}$$
(20)

where \(P_{\mathcal{W}}\) denotes the orthogonal projection onto \(\mathcal{W}\) (cf. (14)). For the reader’s convenience, we recall here from (10) that the Hessian matrix of \(f\in \mathcal{F}(m_0, m_0, m_1)\) can be expressed as

$$\begin{aligned} \nabla ^2 f(x) = \sum _{i=1}^{m_0} \gamma _{i}(x) a_i\otimes a_i + \sum ^{m_1}_{\ell = 1} \tau _\ell (x) v_{\ell }(x) \otimes v_{\ell }(x), \end{aligned}$$

where \(\gamma _i(x), \tau _\ell (x), \) and \(v_\ell (x)\) are introduced in (11)–(13). We further simplify this expression by introducing the notations

$$\begin{aligned} V_x&= \left( v_1(x) | \dots | v_{m_1}(x)\right) =AG_x B,\\ \Gamma _x&= {\text {diag}}\left( \gamma _1(x), \dots , \gamma _{m_0}(x)\right) ,\\ T_x&= {\text {diag}}\left( \tau _1(x), \dots , \tau _{m_1}(x)\right) , \end{aligned}$$

which allow us to rewrite (10) in terms of matrix multiplications

$$\begin{aligned} \nabla ^2 f(x) = A \Gamma _x A^T + V_x T_x V_x^T. \end{aligned}$$

Lemma 8

Let \(f \in \mathcal{F}(m_0, m_0, m_1)\) and let \(\hat{W}, W^*\) be defined as in (19)–(20), where \(\mu _X\) satisfies \(\mathrm{supp\ }(\mu _X) \subseteq B^{m_0}_1\) and has sub-Gaussian norm \(\left\| {X}\right\| _{\psi _2}\). Then the bound

$$\begin{aligned} \left\| {\hat{W} - W^*}\right\| _F \le \sqrt{m_X}\left( C_{\Delta }\epsilon m_1m_0^{\frac{3}{2}} + C\left\| {A}\right\| ^2 \left\| {B}\right\| ^2 \left\| {X}\right\| _{\psi _2}\sqrt{m_1\log (m_0+1)}\right) \end{aligned}$$

holds with probability at least \( 1- 2 \exp \left( -c m_1m_X \right) \), where \(c > 0\) is an absolute constant and \(C, C_{\Delta }>0\) depend only on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).

Proof

By triangle inequality, we get

$$\begin{aligned} \left\| {\hat{W} - W^*}\right\| _F&\le \left\| {\hat{W} - W}\right\| _F + \left\| { W^*-W}\right\| _F. \end{aligned}$$
(21)

For the first term on the right-hand side, we can use the worst-case estimate from Lemma 7 to get

$$\begin{aligned} \left\| {\hat{W} - W}\right\| _F \le \sqrt{m_X} \sup _{x \in B_1^{m_0}}\Vert \Delta _\epsilon ^2 f(x) -\nabla ^2 f(x)\Vert _F \le \sqrt{m_X} C_{\Delta }\epsilon m_1m_0^{\frac{3}{2}} \end{aligned}$$
(22)

for some constant \(C_{\Delta }>0\). The second term in (21) can be bounded by (the explanation of the individual identities and estimates follows immediately below)

$$\begin{aligned} \left\| {W - W^*}\right\| _F^2&= \sum ^{m_X}_{i=1}\left\| {{\text {vec}}(\nabla ^2 f(X_i)) - {\text {vec}}(P_{\mathcal{W}} \nabla ^2 f(X_i))}\right\| _2^2 \\&\le \sum _{i=1}^{m_X} \left\| {V_{X_i}T_{X_i}V_{X_i}^T - V_0 T_{X_i} V_0^T}\right\| ^2_F \le \sum _{i=1}^{m_X} 4\left\| {(V_{X_i} - V_{0})T_{X_i}V_{X_i}^T}\right\| _F^2\\&\le 4 \sum _{i=1}^{m_X} \left\| {V_{X_i} - V_{0}}\right\| ^2 \left\| {T_{X_i}}\right\| _F^2 \left\| {V_{X_i}}\right\| ^2 \\&\le 4\left\| {A}\right\| ^4 \left\| {B}\right\| ^4 \sum ^{m_X}_{i=1} \left\| {G_{X_i}-G_{0}}\right\| ^2 \left\| {T_{X_i}}\right\| _F^2 \left\| {G_{X_i}}\right\| ^2 \\&\le 4\left\| {A}\right\| ^4 \left\| {B}\right\| ^4 m_1\kappa _1^2 \eta _2^2 \sum ^{m_X}_{i=1} \left\| {G_{X_i}-G_{0}}\right\| ^2\\&\le 4 \kappa _1^2 \kappa _2^2 \eta _2^2 \left\| {A}\right\| ^4 \left\| {B}\right\| ^4 m_1\sum ^{m_X}_{i=1} \left\| {A^T X_i}\right\| ^2_\infty . \end{aligned}$$

In the first equality and the subsequent estimate, we made use of the fact that \(A\Gamma _x A^T \in \mathcal{W}\) and that, by definition of an orthogonal projection, \(\left\| {V_{X_i}T_{X_i}V_{X_i}^T - P_{\mathcal{W}}V_{X_i}T_{X_i}V_{X_i}^T}\right\| _F \le \left\| {V_{X_i}T_{X_i}V_{X_i}^T - V_0 T_{X_i} V_0^T }\right\| _F\). The remaining inequalities follow directly from the submultiplicativity of \(\left\| {\cdot }\right\| _F\) and \(\left\| {\cdot }\right\| \) combined with the Lipschitz continuity of the activation functions and their derivatives (cf. (A3) in Definition 3). Since \(\left\| {a_j}\right\| \le 1\), we can estimate the sub-exponential norm of \(\left\| {A^T X_i}\right\| ^2_\infty = \max _{1 \le j \le m_0} \langle X_i, a_j \rangle ^2\) by

$$\begin{aligned} \left\| {\max _{1 \le j \le m_0} \langle X_i, a_j \rangle ^2}\right\| _{\psi _1}&\le c_1 \log (m_0+1) \max _{1 \le j \le m_0} \left\| {\langle X_i, a_j \rangle ^2}\right\| _{\psi _1} \\&= c_1 \log (m_0+1) \max _{1 \le j \le m_0} \left\| {\langle X_i, a_j \rangle }\right\| _{\psi _2}^2 \le c_1 \log (m_0+1) \left\| {X}\right\| _{\psi _2}^2, \end{aligned}$$

for an absolute constant \(c_1>0\), where we applied [64, Lemma 2.2.2] in the first inequality and used that \(\left\| {Y}\right\| _{\psi _2}^2 = \left\| {Y^2}\right\| _{\psi _1}\) for any scalar random variable Y, together with the fact that the sub-Gaussian norm of a vector is given by \(\left\| {X}\right\| _{\psi _2} = \sup _{x \in \mathbb {S}^{m_0-1}} \left\| {\langle x, X \rangle }\right\| _{\psi _2}\) (cf. [65]). The random vectors \(X_i\sim \mu _X\) are i.i.d., which allows us to drop the dependency on i in the last step. The previous bound also guarantees a bound on the expectation, which is due to \(\mathbb {E}[|Y|^p] \le p! \left\| {Y}\right\| _{\psi _1}^p\) (cf. [64]), namely, for \(p=1\) and \(Y=\max _{1 \le j \le m_0} \langle X, a_j \rangle ^2\)

$$\begin{aligned} \mathbb {E}\left[ \max _{1 \le j \le m_0} \langle X, a_j \rangle ^2\right] \le \left\| {\max _{1 \le j \le m_0} \langle X_i, a_j \rangle ^2}\right\| _{\psi _1} \le c_1 \log (m_0+1) \left\| {X}\right\| _{\psi _2}^2. \end{aligned}$$
(23)

Denote \(Z_i := \left\| {A^T X_i}\right\| ^2_\infty \) for all \(i = 1, \dots , m_X\), then

$$\begin{aligned} \left\| {W - W^*}\right\| _F^2&\le 4 \kappa _1^2 \kappa _2^2 \eta _2^2 \left\| {A}\right\| ^4 \left\| {B}\right\| ^4 m_1\sum ^{m_X}_{i=1} Z_i. \end{aligned}$$
(24)

Therefore, applying the Bernstein inequality for sub-exponential random variables [65, Theorem 2.8.1] to the right sum in (24) yields

$$\begin{aligned} \left\| {W - W^*}\right\| _F^2 \le 4 \kappa _1^2 \kappa _2^2 \eta _2^2 \left\| {A}\right\| ^4 \left\| {B}\right\| ^4 (c_1 m_X m_1\log (m_0+1) \left\| {X}\right\| _{\psi _2}^2 + t), \end{aligned}$$

with probability at least

$$\begin{aligned}&1- 2 \exp \left( -c \min \left( \frac{t^2}{\sum _{i=1}^{m_X} \left\| {Z_i}\right\| ^2_{\psi _1}}, \frac{t}{\max _{i\le m_X}\left\| {Z_i}\right\| _{\psi _1}} \right) \right) \\&\quad \ge 1-2 \exp \left( -c \min \left( \frac{t^2}{m_X(c_1 \log (m_0+1)\left\| {X}\right\| _{\psi _2}^2)^2}, \frac{t}{c_1 \log (m_0+1)\left\| {X}\right\| _{\psi _2}^2} \right) \right) , \end{aligned}$$

for all \(t \ge 0\) and an absolute constant \(c >0\). Then, by choosing \(t = c_1 m_X m_1\log (m_0+1) \left\| {X}\right\| _{\psi _2}^2\), we get

$$\begin{aligned} \left\| {W - W^*}\right\| _F^2 \le 8 \kappa _1^2 \kappa _2^2 \eta _2^2 \left\| {A}\right\| ^4 \left\| {B}\right\| ^4 c_1 m_X m_1\log (m_0+1) \left\| {X}\right\| _{\psi _2}^2 \end{aligned}$$
(25)

with probability at least

$$\begin{aligned} 1- 2 \exp \left( -c \min \left( m_1^2 m_X, m_1m_X \right) \right) = 1- 2 \exp \left( -c m_1m_X\right) . \end{aligned}$$
(26)

From (21), combining (22) and (25) yields

$$\begin{aligned} \left\| {\hat{W} - W^*}\right\| _F&\le \sqrt{m_X} \sup _{x \in B_1^{m_0}}\Vert \Delta _\epsilon ^2 f(x) -\nabla ^2 f(x)\Vert _F \\&\quad + \sqrt{8c_1} \kappa _1 \kappa _2 \eta _2 \left\| {A}\right\| ^2 \left\| {B}\right\| ^2 \left\| {X}\right\| _{\psi _2}\sqrt{m_X m_1\log (m_0+1)}\\&\le \sqrt{m_X} C_{\Delta } \epsilon m_1m_0^{\frac{3}{2}}\\&\quad + \sqrt{8c_1} \kappa _1 \kappa _2 \eta _2 \left\| {A}\right\| ^2 \left\| {B}\right\| ^2 \left\| {X}\right\| _{\psi _2}\sqrt{m_X m_1\log (m_0+1)}, \end{aligned}$$

where we used Lemma 7 in the second inequality, and the result holds at least with the probability given as in (26). Setting \(C:=\sqrt{8c_1} \kappa _1 \kappa _2 \eta _2 >0\) finishes the proof.

Lemma 9

Let \(\mu _X\) be centered with \(\mathrm{supp\ }(\mu _X) \subseteq B^{m_0}_1\). Furthermore, assume that \(f \in \mathcal{F}(m_0, m_0, m_1)\) and that \(\hat{W}\) is given by (19) with step-size \(\epsilon > 0\). If

$$\begin{aligned} \sigma _{m_0+ m_1}\left( \mathbb {E}_{X \sim \mu _X}{\text {vec}}(\nabla ^2 f(X)) \otimes {\text {vec}}(\nabla ^2 f(X))\right) \ge \alpha > 0, \end{aligned}$$

then we have

$$\begin{aligned} \sigma _{m_0+ m_1}(\hat{W}) \ge \sqrt{m_X}\left( \sqrt{\frac{\alpha }{2}} - C_{\Delta }\epsilon m_1m_0^{\frac{3}{2}}\right) , \end{aligned}$$

with probability at least \(1 - (m_0+ m_1)\exp \left( -\frac{m_X \alpha }{8 C_1 \left\| {A}\right\| ^4\left\| {B}\right\| ^4 m_1}\right) \), where \(C_\Delta , C_1 > 0\) depend only on the constants \(\kappa _j, \eta _j\) for \(j=0,\dots ,3\).

Proof

By Weyl’s inequality and re-using (22), we obtain

$$\begin{aligned} \sigma _{m_0+ m_1}(\hat{W}) \ge \sigma _{m_0+ m_1}(W) - \left\| {W - \hat{W}}\right\| \ge \sigma _{m_0+ m_1}(W) - \sqrt{m_X}\, C_{\Delta } \epsilon m_1m_0^{\frac{3}{2}}. \end{aligned}$$
(27)

For the first term on the right-hand side, we have \(\sigma _{m_0+ m_1}(W)^2 = \sigma _{m_0+ m_1}(W W^T)\), where \(W W^T\) can be written as a sum of the outer products of the columns of W,

$$\begin{aligned} W W^T&= \sum _{i=1}^{m_X} {\text {vec}}(\nabla ^2 f(X_i)) \otimes {\text {vec}}(\nabla ^2 f(X_i)). \end{aligned}$$

Additionally, the matrices \({\text {vec}}(\nabla ^2 f(X_i)) \otimes {\text {vec}}(\nabla ^2 f(X_i))\) are independent, positive semidefinite random matrices. The Chernoff bound for the eigenvalues of sums of random matrices, due to Gittens and Tropp [25], applied to the right-hand side of the last equation yields the following lower bound:

$$\begin{aligned} \sigma _{m_0+ m_1}\left( \sum _{i=1}^{m_X} {\text {vec}}(\nabla ^2 f(X_i)) \otimes {\text {vec}}(\nabla ^2 f(X_i))\right) \ge t m_X \alpha \text { for } t \in [0,1], \end{aligned}$$

with probability at least

$$\begin{aligned} 1 - (m_0+ m_1)\exp \left( -(1 - t)^2 \frac{m_X \alpha }{2K}\right) , \end{aligned}$$

where we set \(K = \max _{x \in B^{m_0}_1} \left\| {{\text {vec}}(\nabla ^2 f(x)) \otimes {\text {vec}}(\nabla ^2 f(x))}\right\| \). To estimate K more explicitly, we first have to bound the norm of the Hessian matrices. Let \(X \sim \mu _X\), then

$$\begin{aligned} \left\| {\nabla ^2 f(X)}\right\| _F&\le \sup _{x \in B_1^{m_0}} \left\| {\nabla ^2 f(x)}\right\| _F = \sup _{x \in B_1^{m_0}} \left\| {A \Gamma _x A^T + V_x T_x V_x^T}\right\| _F \\&\le \sup _{x \in B_1^{m_0}} \left\| {A}\right\| ^2 \left( \kappa _2 \Vert B\Vert \sqrt{\sum _{\ell =1}^{m_1} h'_\ell (b_\ell ^T g(A^T x))^2} + \left\| {B}\right\| ^2 \kappa _1^2 \Vert T_x\Vert _F\right) \\&\le \left\| {A}\right\| ^2 \left( \kappa _2 \Vert B\Vert \eta _1 \sqrt{m_1} + \left\| {B}\right\| ^2 \kappa _1^2 \eta _2 \sqrt{m_1}\right) \le \sqrt{C_1} \left\| {A}\right\| ^2 \left\| {B}\right\| ^2 \sqrt{m_1}, \end{aligned}$$

for some constant \(C_1 > 0\). Now we can further estimate K by

$$\begin{aligned} K&= \max _{x\in B_1^{m_0}}\left\| {{\text {vec}}(\nabla ^2 f(x)) \otimes {\text {vec}}(\nabla ^2 f(x))}\right\| = \max _{x\in B_1^{m_0}}\left\| {{\text {vec}}(\nabla ^2 f(x)) \otimes {\text {vec}}(\nabla ^2 f(x))}\right\| _F\\&= \max _{x\in B_1^{m_0}}\left\| {\nabla ^2 f(x)}\right\| _F^2 \le C_1 \left\| {A}\right\| ^4\left\| {B}\right\| ^4 m_1. \end{aligned}$$

Finally, we can finish the proof by plugging the above into (27) and by setting \(t = \frac{1}{2}\).

Proof of Theorem 5

The proof is a combination of the previous lemmas together with an application of Wedin’s bound [59, 66]. Given \(\hat{W},W^*\), let \(\hat{U} \hat{\Sigma }\hat{V}^T, U^* \Sigma ^* {V^*}^T\) be their respective singular value decompositions. Furthermore, denote by \(\hat{U}_1, U_1^*\) the matrices formed by only the first \(m_0+m_1\) columns of \(\hat{U}, U^*\), respectively. According to this notation, Algorithm 2 returns the orthogonal projection \(P_{\hat{\mathcal{W}}}= \hat{U}_1 \hat{U}_1^T.\) We also denote by \(P_{\mathcal{W}^*}\) the projection given by \(P_{\mathcal{W}^*} = U_1^* {U_1^*}^T.\) Then we can bound the difference of the projections by applying Wedin’s bound

$$\begin{aligned} \left\| {P_{\hat{\mathcal{W}}} -P_{ \mathcal{W}^*}}\right\| _F = \left\| {\hat{U}_1 \hat{U}_1^T -U_1^* {U_1^*}^T}\right\| _F \le \frac{2 \left\| {\hat{W} - W^*}\right\| _F}{\bar{\alpha }}, \end{aligned}$$

as soon as \(\bar{\alpha }>0\) satisfies

$$\begin{aligned} \bar{\alpha }\le \min _{\begin{array}{c} 1<j\le m_0+ m_1\\ m_0+ m_1+1 \le k \end{array}} \left| { \sigma _j(\hat{W})- \sigma _k( W^*) }\right| \text { and }\bar{\alpha } \le \min _{1\le j \le m_0+ m_1} \sigma _{j}(\hat{W}). \end{aligned}$$

Since \( \mathcal{W}\) has dimension \(m_0+ m_1\), we have \(\max _{ k \ge m_0+ m_1+1} \sigma _k(W^*) = 0\). Therefore the second inequality is equivalent to the first, and we can choose \(\bar{\alpha } = \sigma _{m_0+ m_1}(\hat{W}) \le \min _{1\le j \le m_0+ m_1} \sigma _{j}(\hat{W})\). Thus, we end up with the inequality

$$\begin{aligned} \left\| {P_{\hat{\mathcal{W}}} -P_{\mathcal{W}^*}}\right\| _F \le \frac{2 \left\| {\hat{W} - W^*}\right\| _F}{\sigma _{m_0+ m_1}(\hat{W})}. \end{aligned}$$

Applying the union bound for the two events in Lemma 8 and Lemma 9 in combination with the respective inequalities yields

$$\begin{aligned} \left\| {P_{\hat{\mathcal{W}}} -P_{\mathcal{W}^*}}\right\| _F \le \frac{C_{\Delta }\epsilon m_1m_0^{\frac{3}{2}} + C\left\| {A}\right\| ^2 \left\| {B}\right\| ^2 \left\| {X}\right\| _{\psi _2}\sqrt{m_1\log (m_0+1)}}{\sqrt{\frac{\alpha }{2}}- C_{\Delta }\epsilon m_1m_0^{\frac{3}{2}}} \end{aligned}$$

with probability at least \(1 - 2e^{-c m_1m_X} - (m_0+ m_1)e^{ -\frac{\alpha }{8C_1 \left\| {A}\right\| ^4\left\| {B}\right\| ^4 m_1} m_X }\), where \(C,C_1,C_{\Delta }, c > 0\) are the constants from the lemmas above.
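The following numpy sketch illustrates the mechanism of this proof on synthetic data; it is not the paper's Algorithm 2, and all sizes and noise levels are our own choices. It forms the projections onto the span of the first r left singular vectors of a rank-r matrix and of a perturbed copy, and compares their distance with the Wedin-type bound \(2\Vert \hat{W} - W^*\Vert _F/\sigma _{r}(\hat{W})\) used above.

```python
import numpy as np

# Illustrative sketch (not the paper's Algorithm 2): build the orthogonal
# projection onto the span of the first r left singular vectors, and compare
# the projection error with the Wedin-type bound
#     ||P_hat - P_star||_F <= 2 ||W_hat - W_star||_F / sigma_r(W_hat).
rng = np.random.default_rng(1)
D, n, r = 36, 50, 8                                   # hypothetical sizes

U = np.linalg.qr(rng.standard_normal((D, r)))[0]
W_star = U @ rng.standard_normal((r, n))              # exact rank-r matrix
W_hat = W_star + 1e-2 * rng.standard_normal((D, n))   # noisy estimate

def top_projection(W, r):
    """Orthogonal projection onto the span of the first r left singular vectors."""
    U_r = np.linalg.svd(W, full_matrices=False)[0][:, :r]
    return U_r @ U_r.T

P_star, P_hat = top_projection(W_star, r), top_projection(W_hat, r)
err = np.linalg.norm(P_hat - P_star, 'fro')
bound = (2 * np.linalg.norm(W_hat - W_star, 'fro')
         / np.linalg.svd(W_hat, compute_uv=False)[r - 1])
print(err, bound)                                     # err should not exceed the bound
```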

3 Recovery of Individual (Entangled) Neural Network Weights

The symmetric rank-1 matrices \(\{a_i \otimes a_i: i \in [m_0] \}\cup \{ v_\ell \otimes v_\ell : \ell \in [m_1] \}\) made of tensors of (entangled) neural network weights are the spanning elements of \(\mathcal{W}\), which in turn can be approximated by \(\hat{\mathcal{W}}\) as has been proved above. In this section, we explain under which conditions it is possible to stably identify approximations to the network profiles \(\{a_i: i \in [m_0] \}\cup \{ v_\ell : \ell \in [m_1] \}\) by a suitable selection process, Algorithm 3.

To simplify notation, we drop the distinction between the weights \(a_i\) and \(v_\ell \) and simply denote \(\mathcal{W}= {\text {span}}\left\{ w_1 \otimes w_1,\ldots , w_{m} \otimes w_{m}\right\} \), where \(m = m_0+ m_1\), and every \(w_\ell \) equals either one of the \(a_i\)'s or one of the \(v_\ell \)'s. Thus, m may be larger than d. We also use the notations \(W_j := w_j \otimes w_j\), and \(\hat{W}_j:= P_{\hat{\mathcal{W}}}(W_j)\). Provided that the approximation error \(\delta := \left\| {P_{\mathcal{W}} - P_{\hat{\mathcal{W}}}}\right\| _F\) satisfies \(\delta < 1\) (cf. Theorem 5), \(\{\hat{W}_j: j \in [m]\}\) is the image of a basis under a bijective map and thus can be used as a basis for \(\hat{\mathcal{W}}\) (see Lemma 32 in the Appendix). We quantify the deviation from orthonormality by \(\nu := C_F- 1\), see (9). As an example of suitable frames, normalized tight frames achieve the bounds \(c_f =C_F= m/d\) [4, Theorem 3.1], see also [11]. For instance, for such frames \(m=\lceil 1.2 d \rceil >d \) would allow for \(\nu =0.2\). These finite frames are related to the Thomson problem of spherical equidistribution, which asks for the optimal way to place m points on the sphere \(\mathbb S^{d-1}\) in \(\mathbb {R}^d\) so that the points are as far from each other as possible. We further note that if \(0<\nu < 1\) then \(\{W_j: j \in [m]\}\) is a system of linearly independent matrices, and therefore a Riesz basis (see Lemma 29 and (47) in the Appendix). We denote the corresponding lower and upper Riesz constants by \(c_r,C_R\).
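As an illustration of the quantities just introduced, the following sketch (our own, with hypothetical dimensions) estimates the frame bounds of a system of unit vectors \(\{w_j\}\) as the extreme eigenvalues of the frame operator \(\sum _j w_j w_j^T\) and reports \(\nu = C_F - 1\); for a normalized tight frame one would obtain \(c_f = C_F = m/d\).

```python
import numpy as np

# Minimal sketch: estimate the frame bounds c_f, C_F of a system of unit
# vectors {w_j} in R^d as the extreme eigenvalues of the frame operator
# S = sum_j w_j w_j^T, so that nu = C_F - 1 quantifies the deviation from an
# orthonormal system.
rng = np.random.default_rng(2)
d, m = 20, 24                                   # hypothetical dimensions, m > d

W = rng.standard_normal((d, m))
W /= np.linalg.norm(W, axis=0, keepdims=True)   # columns w_j on the unit sphere

S = W @ W.T                                     # frame operator
eigs = np.linalg.eigvalsh(S)
c_f, C_F = eigs[0], eigs[-1]
print(f"c_f = {c_f:.3f}, C_F = {C_F:.3f}, nu = {C_F - 1:.3f}, m/d = {m/d:.3f}")
```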

Finally, for any real, symmetric matrix X, we let \(X = \sum _{j=1}^{d}\lambda _j(X)u_j(X)\otimes u_j(X)\) be the spectral decomposition ordered according to \(\Vert X\Vert =\lambda _1(X) \ge \ldots \ge \lambda _d(X)\) (in case \(\lambda _1(X)= - \Vert X\Vert \), we consider \(-X\) instead of X). In the following, Theorem 11 provides general recovery guarantees for network weights obtained as the eigenvector associated with the largest eigenvalue in absolute value of any suitable matrix \(M \in \hat{\mathcal{W}}\cap \mathbb S\).

Remark 10

The problem considered in this section is how to approximate the individual \(w_\ell \otimes w_\ell \) within the space \(\mathcal{W}\), or more precisely by using its approximation \(\hat{\mathcal{W}}\). Since the analysis below is completely agnostic to how the space \(\hat{\mathcal{W}}\) has been constructed (in particular, it does not rely on the fact that it comes from second-order differentiation of a two hidden layer network), we are implicitly also able to address the problem of identifying the weights of one hidden layer networks (2) with a number m of neurons larger than the input dimension d, which was left as an open problem in [23].

3.1 Recovery Guarantees

The network profiles \(\{w_j, j \in [m] \}\) are (up to sign) uniquely defined by matrices \(\{W_j: j \in [m] \}\) as they are precisely the eigenvectors corresponding to the unique nonzero eigenvalue. Therefore, it suffices to recover \(\{W_j: j \in [m] \}\), and we have to study when such matrices can be uniquely characterized within the matrix space \(\mathcal{W}\) by their rank-1 property. Let us stress that this problem is strongly related to similar and very relevant ones appearing recently in the literature addressing nonconvex programs to identify sparse vectors and low-rank matrices in linear subspaces, see, e.g., in [45, 49]. In Appendix 2 (Lemma 30 and Corollary 31), we prove that unique identification is possible if any subset of \(\lceil m/2\rceil + 1\) vectors of \(\{w_j: j \in [m] \}\) is linearly independent and that such subset linear independence is actually implied by the frame bounds (9) if \(\nu =C_F-1 < \lceil \frac{m}{2}\rceil ^{-1}\). Unfortunately, this assumption seems a bit too restrictive in our scenario; hence, we instead resort to a weaker and robust version given by the following result. In particular, we prove that any near rank-1 matrix in \(\hat{\mathcal{W}}\) of unit Frobenius norm is not too far from one of the \(W_j\)’s, provided that \(\delta \) and \(\nu \) are small.

Theorem 11

Let \(M \in \hat{\mathcal{W}}\cap \mathbb {S}\) and assume \(\max \{\delta ,\nu \}\le 1/4\). If \(\lambda _1(M) > \max \{2\delta , \lambda _2(M)\}\) then

$$\begin{aligned} \min _{\begin{array}{c} j=1,\ldots ,m,\\ s \in \{-1,1\} \end{array}}\left\| {s w_{j} - u_1(M)}\right\| \le \sqrt{8}\frac{c_r^{-1/2}\sqrt{\nu } + \nu + 2\delta }{\lambda _1(M) - \lambda _2(M)}. \end{aligned}$$
(28)

Before proving Theorem 11, we need the following technical result.

Lemma 12

For any \(M = \sum _{j=1}^{m}\sigma _j \hat{W}_j\in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(M) \ge \delta /(1-\delta )\) we have \(\max _{j}\sigma _j \ge 0\).

Proof

Assume, to the contrary, that \(\max _{j}\sigma _j < 0\), and denote \(Z = \sum _{j=1}^{m} \sigma _j W_j\) with \(M = P_{\hat{\mathcal{W}}}(Z)\). Then Z is negative definite, since \(v^TZv = \sum _{j=1}^{m}\sigma _j \left\langle w_j, v\right\rangle ^2\) and \(\sigma _j < 0\) for all \(j=1,\ldots ,m\). Moreover, we have \(\left\| {Z}\right\| _F \le (1-\delta )^{-1}\) by Lemma 32, and thus we get a contradiction by

$$\begin{aligned} \frac{\delta }{1-\delta } \le \lambda _1(M) \le \lambda _1(Z) + \left\| {M - Z}\right\| _F < \left\| {M-Z}\right\| _F \le \frac{\delta }{1-\delta }. \end{aligned}$$

Proof of Theorem 11

Let \(\lambda _1 := \lambda _1(M)\), \(u_1 := u_1(M)\) for short in this proof. We can represent M in terms of the basis elements of \(\hat{\mathcal{W}}\) as \(M = \sum _{j=1}^{m}\sigma _j \hat{W}_j\), and let \(Z \in \mathcal{W}\) satisfy \(M = P_{\hat{\mathcal{W}}}(Z)\). Furthermore, let \(\sigma _{j^*} = \max _j \sigma _j \ge 0\), where the nonnegativity follows from Lemma 12 (which applies since \(\lambda _1 > 2\delta \ge \delta /(1-\delta )\) for \(\delta \le 1/4\)). Using \(Z = \sum _{j=1}^{m}\sigma _j w_j\otimes w_j\) and \(\left\| {Z}\right\| _F \le (1-\delta )^{-1}\), we first notice that

$$\begin{aligned} \begin{aligned} \lambda _1&= \left\langle M, u_1 \otimes u_1\right\rangle = \left\langle Z, u_1 \otimes u_1\right\rangle + \left\langle M-Z, u_1 \otimes u_1\right\rangle \\&\le \sum _{j=1}^{m}\sigma _j \left\langle w_j, u_1\right\rangle ^2 + \left\| {M - Z}\right\| _F \le \sigma _{j^*} C_F+ 2\delta \le \sigma _{j^*} + \nu + 2\delta , \end{aligned} \end{aligned}$$
(29)

and

$$\begin{aligned} \begin{aligned} \lambda _1&= \left\langle M, u_1 \otimes u_1\right\rangle \ge \max _j \left\langle Z, w_j \otimes w_j\right\rangle - 2\delta \ge \sigma _{j^*} + \sum _{i \ne j^*}\sigma _i \left\langle w_i, w_{j^*}\right\rangle ^2 - 2 \delta \\&\ge \sigma _{j^*} - \left\| {\sigma }\right\| _{\infty }\nu - 2 \delta \ge \sigma _{j^*} -2\nu - 2 \delta , \end{aligned} \end{aligned}$$

where we used \(\left\| {\sigma }\right\| _{\infty }\le (1-\delta )^{-1}(1-\nu )^{-1} \le 2\) according to Lemma 33. Hence \(\left| {\lambda _1 - \sigma _{j^*}}\right| \le 2\delta + 2\nu \). Define now \(Q := \mathsf {Id}- u_1 \otimes u_1\). Choosing \(s \in \{-1, 1\}\) so that \(s\left\langle w_{j^*},u_1\right\rangle \ge 0\), we can bound the left hand side in (28) by

$$\begin{aligned} \left\| {s w_{j^*} - u_1}\right\| ^2&= 2(1 - \left\langle s w_{j^*},u_1\right\rangle ) \le 2(1 - \left\langle w_{j^*},u_1\right\rangle ^2) = 2\left\| {Qw_{j^*}}\right\| ^2 =2\left\| {Q W_{j^*}}\right\| _F^2. \end{aligned}$$

Viewing \(W_{j^*} = w_{j^*}\otimes w_{j^*}\) as the orthogonal projection onto the eigenspace of the matrix \(\lambda _1 W_{j^*}\) corresponding to eigenvalues in \([\lambda _1, \infty )\), we can use the Davis-Kahan theorem in the version of [6, Theorem 7.3.1] to further obtain

$$\begin{aligned} \left\| {s w_{j^*} - u_1}\right\|&\le \sqrt{2}\left\| {Q W_{j^*}}\right\| _F \le \sqrt{2}\frac{\left\| {Q(\lambda _1 W_{j^*} - M)W_{j^*}}\right\| _F}{\lambda _1 - \lambda _2}\nonumber \\&\le \sqrt{2}\frac{\left\| {(\lambda _1 W_{j^*} - M)W_{j^*}}\right\| _F}{\lambda _1 - \lambda _2}. \end{aligned}$$
(30)

To bound the numerator, we first use \( \left\| {Z - M}\right\| _F \le \delta /(1-\delta )\) in the decomposition

$$\begin{aligned} \left\| {(\lambda _1 W_{j^*} - M)W_{j^*}}\right\| _F\le & {} \left\| {(\lambda _1 W_{j^*} - Z)W_{j^*}}\right\| _F + \left\| {Z - M}\right\| _F \\\le & {} \left\| {(\lambda _1 W_{j^*} - Z)W_{j^*}}\right\| _F +\frac{\delta }{1-\delta }, \end{aligned}$$

and then bound the first term using \(\left| {\lambda _1 - \sigma _{j^*}}\right| \le 2\delta + 2\nu \) and the frame property (9) by

$$\begin{aligned} \left\| {(\lambda _1 W_{j^*} - Z)W_{j^*}}\right\| _F&= \left\| {(\lambda _1 - \sigma _{j^*})W_{j^*} + \sum \limits _{j\ne j^*}\sigma _j (w_j \otimes w_j)W_{j^*}}\right\| _F\\&\le \left| {\lambda _1 - \sigma _{j^*}}\right| + \left\| {\sum \limits _{j\ne j^*}\sigma _j \left\langle w_{j^*}, w_j\right\rangle w_{j^*}\otimes w_j}\right\| _F \\&\le 2\delta + 2\nu + \sum \limits _{j\ne j^*}\left| {\sigma _j}\right| \left| {\left\langle w_{j^*}, w_j\right\rangle }\right| \\&\le 2\delta + 2\nu + \left\| {\sigma }\right\| _2 \sqrt{\sum \limits _{j\ne j^*}\left\langle w_{j^*}, w_j\right\rangle ^2} \le 2\delta + 2\nu + \left\| {\sigma }\right\| _2 \sqrt{\nu }. \end{aligned}$$

Combining these estimates with (30) and \(\delta /(1-\delta )\le 2\delta \), we obtain

$$\begin{aligned} \left\| {sw_{j^*} - u_1(M)}\right\| \le \sqrt{2}\frac{2\delta + 2\nu + \left\| {\sigma }\right\| _2 \sqrt{\nu } + 2\delta }{\lambda _1 - \lambda _2} = \sqrt{2}\frac{\left\| {\sigma }\right\| _2 \sqrt{\nu } + 2\nu + 4\delta }{\lambda _1 - \lambda _2} \end{aligned}$$

The result follows since \(\{w_j \otimes w_j: j \in [m]\}\) is a Riesz basis and thus \(\left\| {\sigma }\right\| _2 \le c_r^{-1/2}\left\| {Z}\right\| _F \le 2c_r^{-1/2}\).

The preceding result provides recovery guarantees for network weights given by the eigenvector associated with the largest eigenvalue in absolute value of any suitable matrix \(M \in \hat{\mathcal{W}}\cap \mathbb S\). The estimate is inversely proportional to the spectral gap \(\lambda _1(M) - \lambda _2(M)\). The problem then becomes the constructive identification of matrices M belonging to \(\hat{\mathcal{W}}\cap \mathbb S\) that simultaneously maximize the spectral gap. Inspired by the results in [23], we propose the following nonconvex program as a selector of such matrices

$$\begin{aligned} M = \arg \max \left\| {M}\right\| \quad \text {s.t.}\quad M \in \hat{\mathcal{W}},\quad \left\| {M}\right\| _F \le 1. \end{aligned}$$
(31)

By maximizing the spectral norm under a Frobenius norm constraint, a local maximizer of the program should be as nearly rank one as possible within a given neighborhood. Moreover, if rank one matrices exist in \(\hat{\mathcal{W}}\), these are precisely the global optimizers.
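A quick way to see why maximizing the spectral norm on the Frobenius sphere promotes rank-1 matrices is to note that, for \(\Vert M\Vert _F = 1\), one has \(\Vert M\Vert = 1\) exactly when M is rank-1 and \(\Vert M\Vert \) decreases as the spectrum spreads out; the toy computation below (our own illustration on diagonal matrices) makes this explicit.

```python
import numpy as np

# Illustration of why (31) favors rank-1 matrices: among symmetric matrices
# with ||M||_F = 1, the spectral norm ||M|| is maximal (equal to 1) exactly
# when M is rank 1, and drops as the spectrum spreads out.
def spectral_norm_on_sphere(eigenvalues):
    v = np.asarray(eigenvalues, dtype=float)
    v = v / np.linalg.norm(v)            # normalize so that ||M||_F = 1
    return np.max(np.abs(v))             # spectral norm of diag(v)

print(spectral_norm_on_sphere([1, 0, 0, 0]))   # 1.0   (rank 1)
print(spectral_norm_on_sphere([1, 1, 0, 0]))   # ~0.707
print(spectral_norm_on_sphere([1, 1, 1, 1]))   # 0.5
```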

3.2 A Nonlinear Program: Properties of Local Maximizers of (31)

In this section, we prove that, except for spurious cases, local maximizers of (31) are generically almost rank-1 matrices in \(\hat{\mathcal{W}}\). In particular, we show that local maximizers either satisfy \(\left\| {M}\right\| ^2 \ge 1 - c\delta - c'\nu \), for some small constants \(c, c'\), implying near-minimal rankness, or \(\left\| {M}\right\| ^2 \le c\delta + c' \nu \), i.e., all eigenvalues of M are small (the mentioned spurious cases). Before addressing these estimates, we provide a characterization of the first- and second-order optimality conditions for (31), see [23] and also [61, 62].

Theorem 13

(Theorem 3.4 in [23]) Let \(M \in \hat{\mathcal{W}}\cap \mathbb {S}\) and assume there exists a unique \(i^* \in [d]\) satisfying \(\left| {\lambda _{i^*}(M)}\right| = \left\| {M}\right\| \). If M is a local maximizer of (31), then it fulfills the stationary or first-order optimality condition

$$\begin{aligned} u_{i^*}(M)^T X u_{i^*}(M) = \lambda _{i^*}(M) \left\langle X, M\right\rangle \end{aligned}$$
(32)

for all \(X \in \hat{\mathcal{W}}\). A stationary point M (in the sense that M fulfills (32)) is a local maximizer of (31) if and only if for all \(X \in \hat{\mathcal{W}}\)

$$\begin{aligned} 2 \sum \limits _{k\ne i^*} \frac{(u_{i^*}(M)^T X u_k(M))^2}{\left| {\lambda _{i^*}(M) - \lambda _k(M)}\right| } \le \left| {\lambda _{i^*}(M)}\right| \left\| {X - \left\langle X, M\right\rangle M}\right\| _F^2. \end{aligned}$$
(33)

Proof

The statement requires minor modifications of [23, Theorem 3.4], and the proof follows along analogous lines. For the reader's convenience, we give a self-contained proof of the statement below, with some key computations borrowed from [23].

For simplicity, we drop the argument M in \(\lambda _{i}\), \(u_i\), and without loss of generality we assume \(\lambda _{i^*} = \left\| {M}\right\| \), otherwise we consider \(-M\). Following the analysis in [23], for \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\) we can consider the function

$$\begin{aligned} f_{X}(\alpha ) = \frac{\left\| {M + \alpha X}\right\| }{\left\| {M + \alpha X}\right\| _F}, \end{aligned}$$

because M is a local maximizer if and only if \(\alpha = 0\) is a local maximizer of \(f_{X}\) for all \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\).

Let us consider \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(X\perp M\) first. We note that the simplicity of \(\lambda _{i^*}\) implies that there exist analytic functions \(\lambda _{i^*}(\alpha )\) and \(u_{i^*}(\alpha )\) with \((M+\alpha X)u_{i^*}(\alpha ) = \lambda _{i^*}(\alpha ) u_{i^*}(\alpha )\) for all \(\alpha \) in a neighborhood around 0 [40, 50]. Therefore we can use a Taylor expansion \(\left\| {M+\alpha X}\right\| = \lambda _{i^*} + \lambda '_{i^*}(0)\alpha + \lambda ''_{i^*}(0)\alpha ^2/2 + \mathcal{O}(\alpha ^3)\) and combine it with \(\left\| {M+\alpha X}\right\| _F^{-1} = (1 + \alpha ^2)^{-1/2} = 1 - \alpha ^2/2 + \mathcal{O}(\alpha ^4)\) to get

$$\begin{aligned} f_X(\alpha ) = \left( 1 - \alpha ^2/2\right) \left( \lambda _{i^*} + \lambda '_{i^*}(0)\alpha + \lambda ''_{i^*}(0)\alpha ^2/2\right) + \mathcal{O}(\alpha ^3) \quad \text {as }\quad \alpha \rightarrow 0. \end{aligned}$$

Differentiating once, we get \(f_X'(0) = \lambda '_{i^*}(0)\); hence, \(\alpha = 0\) is a stationary point if and only if \(\lambda '_{i^*}(0)\) vanishes. Following the computations in [23], we find that \(\lambda '_{i^*}(0) = u_{i^*}(0)^T X u_{i^*}(0) = 0\), and thus, (32) follows for any \(X \perp M\). For general X, we split \(X = \left\langle X, M\right\rangle M + X_{\perp }\), and get \(u_{i^*}(0)^T X u_{i^*}(0) = \left\langle X, M\right\rangle u_{i^*}(0)^T M u_{i^*}(0) = \lambda _{i^*}(0)\left\langle X, M\right\rangle \).

For (33), we additionally have to check \(f''_X(0) \le 0\). The second derivative of \(f_X(\alpha )\) at zero is given by \(f_X''(0) = \lambda ''_{i^*}(0) - \lambda _{i^*}(0)\); hence, the condition for attaining a local maximum is \(\lambda ''_{i^*}(0) \le \lambda _{i^*}(0)\). Again, we can follow the computations in [23] to obtain

$$\begin{aligned} \lambda ''_{i^*}(0) = 2 \sum _{k\ne i^*}\frac{(u_{i^*}^T(0) X u_{k}(0))^2}{\left| {\lambda _{i^*}(0)-\lambda _k(0)}\right| }, \end{aligned}$$

and (33) follows immediately for any \(X \perp M\), \(\left\| {X}\right\| _F = 1\). For general X, we decompose it into \(X = \left\langle X, M\right\rangle M + X_{\perp }\). Since \(u_{i^*}^T(0) M u_{k}(0) = 0\) for all \(k\ne i^*\), we get

$$\begin{aligned}&2 \sum _{k\ne i^*}\frac{(u_{i^*}^T(0) \left( \left\langle X, M\right\rangle M + X_{\perp }\right) u_{k}(0))^2}{\left| {\lambda _{i^*}(0)-\lambda _k(0)}\right| }\\&\quad \!=\!2\left\| {X_{\perp }}\right\| _F^2\sum _{k\ne i^*}\frac{\left( u_{i^*}^T(0)\!\left( \frac{X_{\perp }}{\left\| {X_{\perp }}\right\| _F}\right) \! u_{k}(0)\right) ^2}{\left| {\lambda _{i^*}(0)-\lambda _k(0)}\right| } \!\le \! \lambda _{i^*}(0)\left\| {X_{\perp }}\right\| _F^2, \end{aligned}$$

and the result follows from \(\left\| {X_{\perp }}\right\| _F = \left\| {X - \left\langle X, M\right\rangle M}\right\| _F\).

For simplicity, we denote \(u_i := u_i(M)\) and \(\lambda _i = \lambda _i(M)\) throughout the rest of this section. Moreover, we assume M satisfies

  1. (A1)

    \(\lambda _1 = \left\| {M}\right\| \) (this is without loss of generality because \(-M\) and M may be both local maximizers),

  2. (A2)

    \(\lambda _1 > \lambda _2\). (This is a useful technical condition in order to use the second-order optimality condition (33).)

To derive the bounds for \(\lambda _1\), we establish an inequality \(0\le \lambda _1^2(\lambda _1^2 - 1) + c\delta + c'\nu \), which implies that \(\lambda _1^2(M)\) is either close to 0 or close to 1. A first ingredient for obtaining the inequality is

$$\begin{aligned} \left\| {\hat{W}_ju_1}\right\| _2^2 \ge u_1^T \hat{W}_j u_1 - 2\delta = \lambda _1\left\langle \hat{W}_j, M\right\rangle - 2 \delta , \end{aligned}$$
(34)

where we used \(\left| {\Vert \hat{W}_ju_1\Vert ^2 - u_1^T \hat{W}_j u_1}\right| \le 2\delta \) in the inequality, see Lemma 33 in Appendix 2, and (32) in the equality. The other useful technical estimate is provided in the following Lemma, which is proven by leveraging the second order optimality condition (33).

Lemma 14

Assume that M is a local maximizer satisfying (A1) and (A2) and let \(\max \{\delta , \nu \} < 1/4\). For any \(X \in \hat{\mathcal{W}}\) with \(\left\| {X}\right\| _F \le 1\) we have

$$\begin{aligned} \left\| {Xu_1}\right\| _2^2 \le \lambda _1^2\frac{1 + \left\langle X, M\right\rangle ^2}{2} +5\delta + 2\nu . \end{aligned}$$
(35)

For the proof of Lemma 14, we need a lower bound for the smallest eigenvalue (see Appendix 2 for the proof of Lemma 15).

Lemma 15

Assume that M is a stationary point of (31) satisfying (A1) and (A2). If \(\max \{\delta , \nu \} < 1/4\), then \(\lambda _D \ge -2\delta \lambda _1^{-1} - 8\delta - 4\nu \).

Proof of Lemma 14

We first use (32) and (33) to get

$$\begin{aligned} \frac{2}{\lambda _1 - \lambda _D}\left( \left\| {Xu_1}\right\| _2^2 -\lambda _1^2\left\langle X, M\right\rangle ^2\right)&= \frac{2}{\lambda _1 - \lambda _D}\left( \left\| {Xu_1}\right\| _2^2 -\left\langle Xu_1, u_1\right\rangle ^2\right) \\&= \frac{2}{\lambda _1 - \lambda _D} \sum \limits _{k=2}^{D}\left\langle Xu_1, u_k\right\rangle ^2\\&\le 2\sum \limits _{k=2}^{D}\frac{(u_1^T X u_k)^2}{\lambda _1 - \lambda _k} \le \lambda _1 \left\| {X - \left\langle X, M\right\rangle M}\right\| _F^2, \end{aligned}$$

and then rearrange the inequality to obtain

$$\begin{aligned} \left\| {Xu_1}\right\| _2^2&\le \frac{\lambda _1(\lambda _1 - \lambda _D)}{2}\left( \left\| {X}\right\| _F^2 - \left\langle X, M\right\rangle ^2\right) + \lambda _1^2\left\langle X, M\right\rangle ^2 \\&\le \frac{\lambda _1(\lambda _1 - \lambda _D)}{2} +\frac{\lambda _1(\lambda _1 + \lambda _D)}{2}\left\langle X, M\right\rangle ^2\\&= \lambda _1^2\frac{1 + \left\langle X, M\right\rangle ^2}{2} - \lambda _1\lambda _D \frac{1-\left\langle X, M\right\rangle ^2}{2}. \end{aligned}$$

Using the lower bound for \(\lambda _D\) from Lemma 15, and \(\lambda _1\le 1\), we get

$$\begin{aligned} \left\| {Xu_1}\right\| _2^2&\le \lambda _1^2\frac{1 + \left\langle X, M\right\rangle ^2}{2} +\lambda _1 \left( 2\delta \lambda _1^{-1} + 8\delta + 4\nu \right) \frac{1-\left\langle X, M\right\rangle ^2}{2}\\&\le \lambda _1^2\frac{1 + \left\langle X, M\right\rangle ^2}{2} + (10\delta + 4\nu ) \frac{1-\left\langle X, M\right\rangle ^2}{2}= \lambda _1^2\frac{1 + \left\langle X, M\right\rangle ^2}{2} + 5\delta + 2\nu . \end{aligned}$$

By combining (34) and (35), the bounds for \(\lambda _1\) follow.

Theorem 16

Assume that M is a local maximizer of (31) satisfying (A1) and (A2), and assume \(38\delta + 13\nu < 1/4\). Then we have \(\lambda _1^2 \ge 1- 38\delta - 13\nu \) or \(\lambda _1^2 \le 38\delta + 13\nu .\)

Proof

Let \(j^* = \arg \max _j \sigma _j\). We first note that we can assume \(\sigma _{j^*} \ge 0\) without loss of generality by Lemma 12, since there is nothing to show if \(\lambda _1\le 2\delta \). Now we consider (34) and (35) for \(X = \hat{W}_{j^*}\) to get the inequality

$$\begin{aligned} \begin{aligned}&\lambda _1^2\frac{1 + \left\langle \hat{W}_{j^*}, M\right\rangle ^2}{2} +5\delta + 2\nu \ge \lambda _1\left\langle \hat{W}_{j^*}, M\right\rangle - 2 \delta ,\\&\quad \text{ or, } \text{ equivalently, } 0\le \lambda _1^2 - 1 + \left( 1 - \lambda _1\left\langle \hat{W}_{j^*}, M\right\rangle \right) ^2 + 14\delta + 4\nu \\&\quad \text{ or, } \text{ equivalently, } 0\le \lambda _1^2 - 1 + \left( 1 - \lambda _1 \sigma _{j^*} \left\| {\hat{W}_{j^*}}\right\| _F^2 + \lambda _1 \left( \sigma _{j^*} \left\| {\hat{W}_{j^*}}\right\| _F^2 - \left\langle \hat{W}_{j^*}, M\right\rangle \right) \right) ^2 \\&\quad + 14\delta + 4\nu . \end{aligned} \end{aligned}$$
(36)

We separate two cases. In the first case, we have \(\sigma _{j^*} > 1\), which implies \(\langle \hat{W}_{j^*}, M\rangle > 1 - 5\delta - 2\nu \) and thus \(\langle W_{j^*}, M\rangle > 1 - 6\delta - 2\nu \) by Lemma 33 and \(\max \{\delta ,\nu \} < 1/4\). Since \(\langle W_{j^*}, M\rangle = w_{j^*}^T M w_{j^*}\), this implies \(\lambda _1 > 1 - 6\delta - 2\nu \), i.e., the result is proven. We continue with the case \(\sigma _{j^*} \le 1\), which implies \(\lambda _1 \sigma _{j^*} \Vert \hat{W}_{j^*}\Vert _F^2 \le 1\). Using Lemma 33 to bound \(\sigma _{j^*} \Vert \hat{W}_{j^*}\Vert _F^2 - \langle \hat{W}_{j^*}, M\rangle \), \(\lambda _1 \le 1\), and \(\Vert \hat{W}_{j^*}\Vert _F^2 \ge 1 - 2\delta \), the last inequality in (36) implies

$$\begin{aligned} 0\le \lambda _1^2 - 1 + \left( 1 - \lambda _1 \sigma _{j^*} + 6\delta + 2\nu \right) ^2 + 14\delta + 4\nu . \end{aligned}$$
(37)

Furthermore, by following the computation we performed for (29), we get \(\sigma _{j^*} \ge \lambda _1 - \nu - 2\delta \), and inserting it in (37) we obtain

$$\begin{aligned}&0\le \lambda _1^2 - 1 + \left( 1 - \lambda _1^2 + 8\delta + 3\nu \right) ^2 + 14\delta + 4\nu ,\\&\quad \text{ implying } 0 \le \lambda _1^2\left( \lambda _1^2 - 1\right) + 38\delta + 13\nu . \end{aligned}$$

Provided that \(38\delta + 13\nu < 1/4\), this quadratic inequality (in the unknown \(\lambda _1^2\)) has solutions \(\lambda _1^2 \ge 1 - 38\delta - 13\nu \), or \(\lambda _1^2 \le 38\delta + 13\nu \).

In Sect. 3.2, we analyzed local maximizers of (31) and showed that there exist small constants c, c' such that either \(\left\| {M}\right\| ^2 \ge 1 - c\delta - c'\nu \) or \(\left\| {M}\right\| ^2 \le c\delta + c'\nu \). Therefore, a local maximizer of (31) is either almost rank-1 or it has its energy distributed across many eigenvalues. This criterion can be easily checked in practice, and therefore maximizing (31) is a suitable approach for finding near rank-1 matrices in \(\hat{\mathcal{W}}\). In this section, we show how those individual symmetric rank-1 tensors can be approximated by a simple iterative algorithm, Algorithm 3, making exclusive use of the projection \(P_{\hat{\mathcal{W}}}\). Algorithm 3 strives to solve the nonconvex program (31) by iteratively increasing the spectral norm of its iterates. Our approach is closely related to the projected gradient ascent iteration [23, Algorithm 4.1], but we introduce some modifications; in particular, we exchange the order of the normalization and the projection onto \(\hat{\mathcal{W}}\). The proof of convergence of [23, Algorithm 4.1] takes advantage of that different ordering of these operations to address the case where \(\mathcal{W}\) is spanned by at most \(m \le d\) rank-1 matrices formed as tensors of nearly orthonormal vectors (after whitening). In fact, its analysis is heavily based on approximate singular value or spectral decompositions. Unfortunately, in our case the decomposition \({M=\sum _{j=1}^m \sigma _j w_j \otimes w_j}\) does not approximate the singular value or spectral decomposition, since the \(w_j\)'s are redundant (they form a frame) and are therefore not nearly orthonormal in the sense required in [23].

Algorithm 3 is based on the iterative application of the operator \(F_{\gamma }\) defined by

$$\begin{aligned} F_{\gamma }(X) := P_{\mathbb {S}}\circ P_{\hat{\mathcal{W}}}(X + \gamma u_1(X) \otimes u_1(X)), \end{aligned}$$

with \(\gamma >0\) and \(P_{\mathbb {S}}\) as the projection onto the sphere \(\mathbb {S}= \{X : \left\| {X}\right\| _F = 1\}\). The following Lemma shows that, if \(\lambda _1(X) > 0\), the operator \(F_{\gamma }\) is well-defined, in the sense that it is a single-valued operator.
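A minimal Python sketch of this iteration, written directly from the definition of \(F_{\gamma }\) above, is given below; the representation of \(\hat{\mathcal{W}}\) by an orthonormal basis of vectorized symmetric matrices, the function names, and the stopping rule (which mirrors the one reported in Sect. 4) are our own choices and are not the paper's Algorithm 3 pseudocode.

```python
import numpy as np

# Sketch of the iteration M_{j+1} = F_gamma(M_j). The space W_hat is
# represented by an orthonormal basis of vectorized symmetric matrices
# (columns of `basis`, each of length D*D), so that
# P_W_hat(X) = sum_i <X, U_i>_F U_i.

def project_subspace(X, basis):
    """Orthogonal projection of the symmetric matrix X onto span(basis)."""
    coeff = basis.T @ X.reshape(-1)
    return (basis @ coeff).reshape(X.shape)

def top_eig(X):
    """Largest (algebraic) eigenvalue and its eigenvector for symmetric X."""
    vals, vecs = np.linalg.eigh(X)
    return vals[-1], vecs[:, -1]

def F_gamma(X, basis, gamma=2.0):
    """One step X -> P_S( P_W_hat( X + gamma * u1 u1^T ) )."""
    _, u1 = top_eig(X)
    Y = project_subspace(X + gamma * np.outer(u1, u1), basis)
    # Lemma 17 guarantees ||Y||_F >= 1 when lambda_1(X) > 0, so the
    # normalization below is the projection onto the Frobenius sphere.
    return Y / np.linalg.norm(Y, 'fro')

def algorithm3(M0, basis, gamma=2.0, tol=1e-5, max_iter=200):
    """Iterate F_gamma until the top eigenvalue stagnates."""
    M, lam_old = M0, top_eig(M0)[0]
    for _ in range(max_iter):
        M = F_gamma(M, basis, gamma)
        lam = top_eig(M)[0]
        if lam - lam_old < tol:
            break
        lam_old = lam
    return M
```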


Lemma 17

Let \(X\in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(X) > 0\) and \(\gamma >0\). Then \(\Vert P_{\hat{\mathcal{W}}}(X + \gamma u_1(X) \otimes u_1(X))\Vert _F^2 = 1 + 2\gamma \lambda _1(X)+ \gamma ^2 \left\| {P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))}\right\| _F^2\). In particular, \(F_{\gamma }(X)\) is well-defined and can be explicitly expressed as

$$\begin{aligned} F_{\gamma }(X) = \frac{P_{\hat{\mathcal{W}}}(X + \gamma u_1(X) \otimes u_1(X))}{\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u_1(X) \otimes u_1(X))}\right\| _F}. \end{aligned}$$

Proof

The result follows from \(\left\langle X, P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))\right\rangle = \lambda _1(X)\) and computing explicitly the squared norm \(\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u_1(X) \otimes u_1(X))}\right\| _F^2\).

We analyze next the sequence \((M_j)_{j \in \mathbb {N}}\) generated by Algorithm 3. We show that \((\lambda _1(M_j))_{j \in \mathbb {N}}\) is a strictly monotone increasing sequence, converging to a well-defined limit \(\lambda _{\infty }=\lim _{j\rightarrow \infty }\lambda _1(M_j)\), and, if \(\lambda _1(M_j) > 1/\sqrt{2}\) for some j, all convergent subsequences of \((M_j)_{j \in \mathbb {N}}\) converge to fixed points of \(F_{\gamma }\). Moreover, we prove that such fixed points satisfy (32) and are thus stationary points of (31). We begin by providing two equivalent characterizations of (32).

Lemma 18

For \(M \in \hat{\mathcal{W}}\) and \(c \ne 0\), we have

$$\begin{aligned} v^T X v = c \left\langle X,M\right\rangle \quad \text { for all } X \in \hat{\mathcal{W}}\quad \text { if and only if }\quad M = c^{-1} P_{\hat{\mathcal{W}}}(v \otimes v). \end{aligned}$$

Proof

Assume that \(v^T X v = c \left\langle X,M\right\rangle \) for all X. We notice that the assumption is equivalent to \(\left\langle X, v\otimes v - c M\right\rangle = 0\) for all \(X\in \hat{\mathcal{W}}\). Therefore \(P_{\hat{\mathcal{W}}}(v\otimes v - c M) = 0\), and the result follows from \(M \in \hat{\mathcal{W}}\). In the case where \(M = c^{-1} P_{\hat{\mathcal{W}}}(v \otimes v)\), we compute \(c\left\langle X, M\right\rangle = \left\langle X, P_{\hat{\mathcal{W}}}(v \otimes v)\right\rangle = v^T X v\) since \(X\in \hat{\mathcal{W}}\).

Lemma 19

Let \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\). We have \(\left\| {P_{\hat{\mathcal{W}}}(u_j(X)\otimes u_j(X))}\right\| _F \ge \left| {\lambda _j(X)}\right| \) with equality if and only if \(X = \lambda _j(X)^{-1}P_{\hat{\mathcal{W}}}(u_j(X)\otimes u_j(X))\).

Proof

We drop the argument X for \(\lambda _j(X)\) and \(u_j(X)\) for simplicity. We first calculate

$$\begin{aligned} \left\| {P_{\hat{\mathcal{W}}}(u_j \otimes u_j)}\right\| _F&= \left\| {P_{\hat{\mathcal{W}}}(u_j \otimes u_j)}\right\| _F\left\| {X}\right\| _F \ge \left| {\left\langle P_{\hat{\mathcal{W}}}(u_j \otimes u_j), X\right\rangle }\right| \nonumber \\&= \left| {\left\langle u_j \otimes u_j, X\right\rangle }\right| = \left| {\lambda _j}\right| . \end{aligned}$$
(38)

Moreover, if the equality \(\left\| {P_{\hat{\mathcal{W}}}(u_j \otimes u_j)}\right\| _F = \left| {\lambda _j}\right| \) holds, then (38) is actually a chain of equalities. In particular,

$$\begin{aligned} \left\| {P_{\hat{\mathcal{W}}}(u_j \otimes u_j)}\right\| _F\left\| {X}\right\| _F = \left| {\left\langle P_{\hat{\mathcal{W}}}(u_j \otimes u_j), X\right\rangle }\right| , \end{aligned}$$

which implies \(X = c P_{\hat{\mathcal{W}}}(u_j \otimes u_j)\) for some scalar c. Since \(\left\| {X}\right\| _F = 1\), \(c= \lambda _j^{-1}\) follows from

$$\begin{aligned} 1 = \left\langle c P_{\hat{\mathcal{W}}}(u_j \otimes u_j), X\right\rangle = c \left\langle u_j \otimes u_j, X\right\rangle = c \lambda _j. \end{aligned}$$

Lemmas 18 and 19 show that the stationary point condition (32) for M with \(\left\| {M}\right\| = \left| {\lambda _{i^*}(M)}\right| \) and isolated \(\lambda _{i^*}\) is equivalent to both

$$\begin{aligned} M = \lambda _{i^*}^{-1}P_{\hat{\mathcal{W}}}(u_{i^*}(M) \otimes u_{i^*}(M)),\quad \text {and }\quad \left\| {P_{\hat{\mathcal{W}}}(u_{i^*}(M)\otimes u_{i^*}(M))}\right\| _F = \left| {\lambda _{i^*}(M)}\right| . \end{aligned}$$

A similar condition appears naturally if we characterize the fixed points of \(F_{\gamma }\).

Lemma 20

Let \(\gamma > 0\) and \(X \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(X) > 0\). Then we have

$$\begin{aligned} 0< \lambda _1(X) < \left\| {P_{\hat{\mathcal{W}}}(u_1(X) \otimes u_1(X))}\right\| _F\quad&\mathrm{if~and~only~if}\quad \lambda _1(F_{\gamma }(X)) > \lambda _1(X), \end{aligned}$$
(39)
$$\begin{aligned} \lambda _1(X) = \left\| {P_{\hat{\mathcal{W}}}(u_1(X) \otimes u_1(X))}\right\| _F\quad&\mathrm{if~and~only~if}\quad F_{\gamma }(X) = X. \end{aligned}$$
(40)

Proof

For simplicity, we denote \(u:=u_1(X)\) and \(\lambda := \lambda _1(X)\) in this proof. We first prove that \(0< \lambda < \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\) implies \( \lambda _1(F_{\gamma }(X)) > \lambda \). It suffices to show that there exists a unit vector v such that \(v^T F_{\gamma }(X) v> \lambda \). In particular, we can test \(F_{\gamma }(X)\) with \(v=u\), which yields the identity

$$\begin{aligned} u^T F_{\gamma }(X) u - \lambda&= \left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F^{-1}\left\langle P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u), u\otimes u\right\rangle - \lambda \\&=\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F^{-1}\left( \left\langle X, u\otimes u\right\rangle + \gamma \left\langle P_{\hat{\mathcal{W}}}(u \otimes u), u\otimes u\right\rangle \right) - \lambda \\&=\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F^{-1}\left( \lambda + \gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2\right) - \lambda \\&=\frac{\lambda \left( 1 - \left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F \right) + \gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2}{\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F}. \end{aligned}$$

By using now \(\lambda < \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\), we can bound

$$\begin{aligned}&1 - \left\| {P_{\hat{\mathcal{W}}}(X + \gamma u\otimes u)}\right\| _F = 1 - \sqrt{\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u\otimes u)}\right\| _F^{2}} \\&\quad = 1 - \sqrt{\left\| {X + \gamma P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2} \\&\quad =1 - \sqrt{1 + \gamma ^2\left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2 + 2\gamma \left\langle X, P_{\hat{\mathcal{W}}}(u \otimes u)\right\rangle }\\&\quad =1 - \sqrt{1 + \gamma ^2\left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2 + 2\gamma \lambda }\\&\quad >1 - \sqrt{1 + \gamma ^2\left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2 + 2\gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F} \\&\quad = 1 - \sqrt{\left( 1 + \gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\right) ^2}\\&\quad = - \gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F. \end{aligned}$$

Inserting this inequality into the previous identity, we obtain the desired result by

$$\begin{aligned} \begin{aligned} u^T F_{\gamma }(X)u - \lambda&=\frac{\lambda \left( 1 - \left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F \right) + \gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2}{\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F} \\&>\frac{- \lambda \gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F + \gamma \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F^2}{\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F} > 0. \end{aligned} \end{aligned}$$
(41)

We show now that \(F_{\gamma }(X) = X\) implies \(\lambda = \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\). We notice that \(F_{\gamma }(X) = X\) implies \(\lambda _1(F_{\gamma }(X)) = \lambda \), and thus \(\lambda \ge \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) according to (39). Since generally \(\lambda \le \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) by Lemma 19, equality follows.

We address now the converse, i.e., \(\lambda = \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\) implies \(F_{\gamma }(X) = X\), and we note that \(\lambda = \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) implies \(X = \lambda ^{-1}P_{\hat{\mathcal{W}}}(u \otimes u)\) by Lemma 19. Using this and the definition of \(F_{\gamma }(X)\), we get

$$\begin{aligned} F_{\gamma }(X)&= \frac{P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}{\left\| {P_{\hat{\mathcal{W}}}(X + \gamma u \otimes u)}\right\| _F} = \frac{(\lambda ^{-1} + \gamma ) P_{\hat{\mathcal{W}}}(u \otimes u)}{(\lambda ^{-1} + \gamma )\left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F}\\&= \frac{P_{\hat{\mathcal{W}}}(u \otimes u)}{\left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F} = X \end{aligned}$$

To conclude the proof, it remains to show that \(\lambda _1(F_{\gamma }(X)) > \lambda \) implies \(0< \lambda < \left\| {P_{\hat{\mathcal{W}}}(u \otimes u)}\right\| _F\). We always have \(\lambda \le \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) by Lemma 19, and \(\lambda _1(F_{\gamma }(X)) > \lambda \) implies \(F_{\gamma }(X) \ne X\), hence \(\lambda \ne \Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\) by (40); therefore necessarily \(\lambda <\Vert P_{\hat{\mathcal{W}}}(u \otimes u)\Vert _F\).

The preceding Lemma implies the convergence of \((\lambda _1(M_j))_{j \in \mathbb {N}}\) by monotonicity. Moreover, we can also use such convergence to establish \(\Vert M_{j+1}-M_{j} \Vert _F \rightarrow 0\).

Lemma 21

Let \(\gamma > 0\), \(M_0 \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(M_0) > 0\), and let \(M_j := F_{\gamma }(M_{j-1})\). The sequence \((\lambda _1(M_j))_{j \in \mathbb {N}}\) converges to a well-defined limit \(\lambda _{\infty }\), and \(\lim _{j\rightarrow \infty } \Vert M_{j+1}-M_{j} \Vert _F= 0\).

Proof

Denote \(U_j := P_{\hat{\mathcal{W}}}(u_1(M_j) \otimes u_1(M_j))\) and \(\lambda _j := \lambda _1(M_j)\) for simplicity. The sequence \((\lambda _j)_{j\in \mathbb {N}}\) is monotone in the bounded domain [0, 1] by Lemma 20 and therefore converges to a limit \(\lambda _{\infty }\). To prove \(\Vert M_{j+1}-M_{j} \Vert _F\rightarrow 0\), we will exploit \((\lambda _{j+1} - \lambda _{j})\rightarrow 0\). We first have \((\left\| {U_j}\right\| _F - \lambda _j) \rightarrow 0\) since (41) yields

$$\begin{aligned} \lambda _{j+1} - \lambda _{j} \ge \frac{\gamma \left\| {U_j}\right\| _F}{\left\| {M_j + \gamma U_j}\right\| }\left( \left\| {U_j}\right\| _F - \lambda _j\right) \ge \frac{\gamma }{1+\gamma }\left\| {U_j}\right\| _F\left( \left\| {U_j}\right\| _F - \lambda _j\right) , \end{aligned}$$

and \(\left\| {U_j}\right\| _F \ge \lambda _j \ge \lambda _0\) for all j. Define the shorthand \(\Delta _j := \left\| {U_j}\right\| _F - \lambda _j\). We will now show that \(\left\| {M_{j+1} - M_j}\right\| _F \le C(\Delta _j + \sqrt{\Delta _j})\) for some constant C, which suffices since \(\Delta _j \rightarrow 0\). First notice that

$$\begin{aligned} \left\| {M_j - \lambda _j^{-1}U_j}\right\| _F= & {} \sqrt{1 + \frac{\left\| {U_j}\right\| _F^2}{(\lambda _j)^2} - 2\lambda _j^{-1}\left\langle M_j, U_j\right\rangle } = \sqrt{\frac{\left\| {U_j}\right\| _F^2}{(\lambda _j)^2} - 1} \\= & {} \sqrt{\frac{\left\| {U_j}\right\| _F^2 - (\lambda _j)^2}{(\lambda _j)^2}} \le \lambda _0^{-1}\sqrt{2\Delta _j}. \end{aligned}$$

Therefore there exists a matrix \(E_j\) with \(M_j = \lambda _j^{-1}U_j + E_j\) and \(\left\| {E_j}\right\| \le \lambda _0^{-1}\sqrt{2\Delta _j}\). Furthermore, by the triangle inequality we have

$$\begin{aligned} \left\| {M_{j+1} - M_j}\right\| _F \le \left\| {M_{j+1} - \lambda _j^{-1}U_j}\right\| _F + \lambda _{0}^{-1}\sqrt{2\Delta _j}, \end{aligned}$$

hence it remains to bound the first term. Using \(M_j = \lambda _j^{-1}U_j + E_j\) and \(M_{j+1} = \left\| {M_j + \gamma U_j}\right\| _F^{-1} (M_j + \gamma U_j)\), we have \(\left\| {M_j + \gamma U_j}\right\| _FM_{j+1} = (\lambda _j^{-1} + \gamma )U_j + E_j\) and thus

$$\begin{aligned}&\left\| {\left\| {M_j + \gamma U_j}\right\| _F(M_{j+1}- \lambda _j^{-1}U_j)}\right\| _F \\&\quad = \left\| {(\lambda _j^{-1}+\gamma )U_j + E_j -\left\| {(\lambda _j^{-1}+\gamma )U_j + E_j}\right\| _F \lambda _j^{-1} U_j}\right\| _F\\&\quad \le \left| {\lambda _j^{-1}+\gamma - \left\| {(\lambda _j^{-1}+\gamma )U_j + E_j}\right\| _F\lambda _j^{-1}}\right| \left\| {U_j}\right\| _F + \left\| {E_j}\right\| _F\\&\quad \le \left( (\lambda _j^{-1}+\gamma )\left\| {U_j}\right\| _F\lambda _j^{-1} - (\lambda _j^{-1}+\gamma ) + 2\left\| {E_j}\right\| _F \lambda _j^{-1}\right) \left\| {U_j}\right\| _F + \left\| {E_j}\right\| _F\\&\quad \le (\lambda _j^{-1}+\gamma )\left( \left\| {U_j}\right\| _F\lambda _j^{-1} - 1\right) \left\| {U_j}\right\| _F + (1 + 2\lambda _0^{-1})\left\| {E_j}\right\| _F \\&\quad \le (\lambda _0^{-1} + \gamma )\lambda _0^{-1}\Delta _j + (1 + 2\lambda _0^{-1})\sqrt{\Delta _j}. \end{aligned}$$

Since \(\left\| {M_j + \gamma U_j}\right\| _F \ge 1\) according to Lemma 17, \(\left\| {M_{j+1} - M_j}\right\| \rightarrow 0\) follows.

It remains to show that convergent subsequences of \((M_j)_{j \in \mathbb {N}}\) converge to fixed points of \(F_{\gamma }\). Then, by (40) and Lemmas 18 and 19, fixed points satisfy the first-order optimality condition (32) and are thus stationary points of (31). To prove convergence of subsequences to fixed points, we require continuity of \(F_{\gamma }\). The following Lemma shows that \(F_{\gamma }\) is continuous for matrices X satisfying \(\lambda _1(X) > 1/\sqrt{2}\), i.e., if the largest eigenvalue is isolated and \(u_1(X)\) is a continuous function of X.

Lemma 22

Let \(\gamma > 0\), \(\epsilon > 0\) arbitrary, and define \(\mathcal{M}_{\epsilon }:= \{M \in \hat{\mathcal{W}}\cap \mathbb {S}: \lambda _1(M) \ge (\frac{1}{2} + \epsilon )^{1/2}\}\). Then \(F_{\gamma }(X) \in \mathcal{M}_{\epsilon }\) for all \(X \in \mathcal{M}_{\epsilon }\), and \(F_{\gamma }\) is \(\left\| {\cdot }\right\| _F\)-Lipschitz continuous on \(\mathcal{M}_{\epsilon }\), with Lipschitz constant \((1 + \gamma /\epsilon )\).

Proof

\(F_{\gamma }(X) \in \mathcal{M}_{\epsilon }\) follows directly from Lemma 20, i.e., from the fact that the largest eigenvalue is only increased by applying \(F_{\gamma }\). For the continuity, consider \(X,Y \in \mathcal{M}_{\epsilon }\). We first note that by using [6, Theorem 7.3.1] and \(\lambda _i(Y) \le \sqrt{1/2 - \varepsilon }\) for \(i=2,\ldots ,m_0\) we get

$$\begin{aligned}&\left\| {X+\gamma P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X)) - Y-\gamma P_{\hat{\mathcal{W}}}(u_1(Y)\otimes u_1(Y))}\right\| _F\\&\quad \le \left\| {X-Y}\right\| _F + \gamma \left\| {u_1(X)\otimes u_1(X) - u_1(Y)\otimes u_1(Y)}\right\| _F\\&\quad \le \left\| {X-Y}\right\| _F + \gamma \frac{\left\| {X - Y}\right\| _F}{\sqrt{\frac{1}{2} + \varepsilon } - \sqrt{\frac{1}{2} - \epsilon }} \le \left( 1 + \frac{\gamma }{\epsilon }\right) \left\| {X - Y}\right\| _F. \end{aligned}$$

Furthermore, we have \(\left\| {X + \gamma P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))}\right\| _F^2 \ge 1\) according to Lemma 17, and therefore \(P_{\mathbb {S}}\) acts on \(X + \gamma P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X))\) and \(Y + \gamma P_{\hat{\mathcal{W}}}(u_1(Y)\otimes u_1(Y))\) as the projection onto the convex set \(\{X : \left\| {X}\right\| _F \le 1\}\), which is a contraction. The result then follows from

$$\begin{aligned} \left\| {F_{\gamma }(X) - F_{\gamma }(Y)}\right\| _F \le \left\| {X+\gamma P_{\hat{\mathcal{W}}}(u_1(X)\otimes u_1(X)) - Y-\gamma P_{\hat{\mathcal{W}}}(u_1(Y)\otimes u_1(Y))}\right\| _F. \end{aligned}$$

The convergence to fixed points of any subsequence of \((M_j)_{j \in \mathbb {N}}\) now follows as a corollary of Lemma 34 in the Appendix.

Theorem 23

Let \(\epsilon > 0\), \(\gamma > 0\), \(M_0 \in \hat{\mathcal{W}}\cap \mathbb {S}\) with \(\lambda _1(M_0) \ge 1/\sqrt{2} + \epsilon \), and let \(M_{j+1} := F_{\gamma }(M_{j})\) be generated by Algorithm 3. Then \((M_{j+1})_{j \in \mathbb {N}}\) has a convergent subsequence, and any such subsequence converges to a fixed point of \(F_{\gamma }\), and hence to a stationary point of (31).

Proof

By Lemma 22, the operator \(F_{\gamma }\) is continuous on \(\mathcal{M}_{\epsilon }:= \{M \in \hat{\mathcal{W}}\cap \mathbb {S}: \lambda _1(M) \ge (\frac{1}{2} + \epsilon )^{1/2}\}\) for any \(\epsilon > 0\). Moreover, by Lemma 20 we have \((M_{j+1})_{j \in \mathbb {N}} \subset \mathcal{M}_{\epsilon }\), and by Lemma 21 we have \(\left\| {M_{j+1} - M_j}\right\| _F \rightarrow 0\). Therefore we can apply Lemma 34 to see that any convergent subsequence converges to a fixed point of \(F_{\gamma }\). Moreover, since \((M_{j+1})_{j \in \mathbb {N}}\) is bounded, there exists at least one convergent subsequence by Bolzano–Weierstrass. Finally, any fixed point \(\bar{M}\) of \(F_\gamma \) can be written as \(\bar{M} = \lambda _1(\bar{M})^{-1}P_{\hat{\mathcal{W}}}(u_1(\bar{M})\otimes u_1(\bar{M}))\) by Lemmas 19 and 20. Since \(\lambda _1(\bar{M}) > 1/\sqrt{2}\), it is an isolated eigenvalue satisfying \(\lambda _1(\bar{M}) = \left\| {\bar{M}}\right\| \), and thus \(\bar{M}\) satisfies the first-order optimality condition (32) of (31) by Theorem 13.

Remark 24

The analysis of the convergence of Algorithm 3 provided above does not use the structure of the space \(\mathcal{W}\) and focuses exclusively on the behavior of the first eigenvalue \(\lambda _1\). As a consequence, it guarantees that the iterates have monotonically increasing spectral norm and that they generically converge to stationary points of (31). However, it does not ensure convergence to nonspurious, minimal-rank local maximizers of (31). In the numerical experiments of Sect. 4, where \(\{w_j:j\in [m]\}\) are sampled randomly from certain distributions, an overwhelming majority of the sequences \((M_{j})_{j\in \mathbb {N}}\) converges to a near rank-1 matrix with an eigenvalue close to one, whose corresponding eigenvector approximates a network profile with good accuracy. To explain this success, we would need a finer and quantitative analysis of the increase of the spectral norm along the iterations, for instance by quantifying the gap

$$\begin{aligned} \left[ \Theta \left\| {P_{\hat{\mathcal{W}}}(u_1(X) \otimes u_1(X))}\right\| _F -\lambda _1(X) \right] \ge 0, \end{aligned}$$

by means of a suitable constant \(0<\Theta <1\). As clarified in the proof of Lemma 19, the smaller the constant \(\Theta >0\), the larger the increase of the spectral norm \(\Vert M_{j+1} \Vert > \Vert M_j\Vert \) between iterations of Algorithm 3. The following result is an attempt to gain a quantitative estimate for \(\Theta \) by injecting more information about the structure of the space \(\mathcal{W}\).

In order to simplify the analysis, let us assume \(\delta =0\), i.e., \(\hat{\mathcal{W}}= \mathcal{W}\).

Proposition 25

Assume that \(\{W_\ell :=w_\ell \otimes w_\ell : \ell \in [m] \}\) forms a frame for \(\mathcal{W}\), i.e., there exist constants \(c_\mathcal{W},C_\mathcal{W}>0\) such that for all \(X \in \mathcal{W}\)

$$\begin{aligned} c_\mathcal{W}\Vert X\Vert _F^2 \le \sum _{\ell =1}^m \langle X, w_\ell \otimes w_\ell \rangle _F^2 \le C_\mathcal{W}\Vert X\Vert _F^2. \end{aligned}$$

Denote \(\{ \tilde{W}_\ell : \ell \in [m] \}\) the canonical dual frame so that

$$\begin{aligned} P_{\mathcal{W}} (X) = \sum _{\ell =1}^m \langle X, \tilde{W}_\ell \rangle _F W_\ell , \end{aligned}$$

for any symmetric matrix X. Then, for \(X \in \mathcal{W}\) and the notation \(\lambda _j:=\lambda _j(X)\), \(\lambda _1 =\Vert X\Vert \) and \(u_j:=u_j(X)\), we have

$$\begin{aligned} \lambda _1= & {} \Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F\left( \sum _{j=1}^{m_0} \sum _{\ell =1}^m \lambda _j \frac{\langle u_j \otimes u_j, \tilde{W}_\ell \rangle _F \langle W_\ell , u_1 \otimes u_1 \rangle _F }{\Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F} \right) \nonumber \\\le & {} \Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F \underbrace{\left( \frac{C_\mathcal{W}}{c_\mathcal{W}}\right) ^{1/2} \left( \sum _{\lambda _j >0} \lambda _j \Vert P_{\mathcal{W}} (u_j \otimes u_j) \Vert _F \right) }_{:=\Theta } \end{aligned}$$
(42)

Proof

Let us fix \(X \in \mathcal{W}\). Then we have two ways of representing X, its frame decomposition and its spectral decomposition:

$$\begin{aligned} X= \sum _{\ell =1}^m \langle X, \tilde{W}_\ell \rangle _F W_\ell = \sum _{j=1}^{m_0} \lambda _j u_j \otimes u_j. \end{aligned}$$

By using both the decompositions and again the notation \(W_\ell = w_\ell \otimes w_\ell \), we obtain

$$\begin{aligned} \lambda _1= & {} u_1^T X u_1 =\sum _{\ell =1}^m \left\langle \sum _{j=1}^{m_0} \lambda _j u_j \otimes u_j, \tilde{W}_\ell \right\rangle _F u_1^T W_\ell u_1 \\= & {} \sum _{\ell =1}^m\left\langle \sum _{j=1}^{m_0} \lambda _j u_j \otimes u_j, \tilde{W}_\ell \right\rangle _F \left\langle w_\ell , u_1\right\rangle ^2 \\= & {} \sum _{\ell =1}^m\sum _{j=1}^{m_0}\lambda _j \langle u_j \otimes u_j, \tilde{W}_\ell \rangle _F \left\langle w_\ell , u_1\right\rangle ^2 \\= & {} \Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F\left( \sum _{j=1}^{m_0} \sum _{\ell =1}^m \lambda _j \frac{\langle u_j \otimes u_j, \tilde{W}_\ell \rangle _F \langle w_\ell , u_1 \rangle ^2 }{\Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F} \right) . \end{aligned}$$

By observing that \(\sum _{\ell =1}^m\langle u_j \otimes u_j, \tilde{W}_\ell \rangle _F^2 \le c_\mathcal{W}^{-1} \Vert P_{\mathcal{W}}(u_j \otimes u_j)\Vert _F^2\) (canonical dual frame upper bound), and using the Cauchy–Schwarz inequality, we can further estimate

$$\begin{aligned} \lambda _1\le & {} \Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F \left( c_\mathcal{W}^{-1/2} \sum _{\lambda _j>0} \frac{\lambda _j \Vert P_{\mathcal{W}} (u_j \otimes u_j) \Vert _F}{\Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F} \right) \left( \sum _{\ell =1}^m\langle w_\ell , u_1 \rangle ^4 \right) ^{1/2} \\\le & {} \Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F \left( \frac{C_\mathcal{W}}{c_\mathcal{W}}\right) ^{1/2} \left( \sum _{\lambda _j >0} \lambda _j \Vert P_{\mathcal{W}} (u_j \otimes u_j) \Vert _F \right) , \end{aligned}$$

where in the last inequality we applied the estimates

$$\begin{aligned} \sum _{\ell =1}^m\langle w_\ell , u_1 \rangle ^4= & {} \sum _{\ell =1}^m\langle w_\ell \otimes w_\ell , u_1 \otimes u_1 \rangle _F^2 \\\le & {} C_\mathcal{W}\Vert P_{\mathcal{W}}(u_1 \otimes u_1)\Vert _F^2. \end{aligned}$$

The meaning of estimate (42) is explained by the following mechanism: Whenever the deviation of an iterate \(M_j\) of Algorithm 3 from being a rank-1 matrix in \(\mathcal{W}\) is large, in the sense that \(\Vert P_{\mathcal{W}} (u_1 \otimes u_1) \Vert _F\) is small, the constant \(\Theta = \left( \frac{C_\mathcal{W}}{c_\mathcal{W}}\right) ^{1/2} \left( \sum _{\lambda _j >0} \lambda _j \Vert P_{\mathcal{W}} (u_j \otimes u_j) \Vert _F \right) \) is also small and the iteration \(M_{j+1} = F_\gamma (M_j)\) will efficiently increase the spectral norm. The gain reduces as soon as the iterate \(M_j\) gets closer and closer to a rank-1 matrix. It would perhaps be possible to obtain an even more precise analysis of the behavior of Algorithm 3 by considering simultaneously the dynamics of (the gaps between) different eigenvalues, not only focusing on \(\lambda _1\). Unfortunately, we could not yet find a proper and conclusive argument.

4 Numerical Experiments About the Recovery of Network Profiles

In this section, we present numerical experiments about the recovery of the network weights \(\{a_i : i \in [m_0]\}\) and \(\{v_\ell : \ell \in [m_1]\}\) from few point queries of the network. The recovery procedure leverages the theoretical insights provided in the previous sections. Without much loss of generality, we neglect the active subspace reduction and focus on the case \(d = m_0\). We construct an approximation \(P_{\hat{\mathcal{W}}} \approx P_{\mathcal{W}}\) using Algorithm 2. Then we randomly generate a number of matrices \(\{M^k_0: k \in [K]\} \subset \hat{\mathcal{W}}\cap \mathbb {S}\) and compute the sequences \(M^k_{j+1} = F_{\gamma }(M_j^k)\) as in Algorithm 3. For each limiting matrix \(M_{\infty }^{k}\), \(k \in [K]\), we compute the eigenvector \(u_1(M_{\infty }^{k})\) associated with the largest eigenvalue, and then cluster \(\{u_1(M_{\infty }^{k}): k \in [K]\}\) into \(m=m_0+ m_1\) classes using kMeans++. After projecting the resulting cluster centers onto \(\mathbb {S}^{d-1}\), we obtain vectors \(\{\hat{w}_j: j \in [m_0+ m_1]\}\) that are used as approximations to \(\{a_i: i\in [m_0]\}\) and \(\{v_\ell : \ell \in [m_1]\}\).
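The following hedged sketch shows how the last step of this pipeline could look in Python, assuming the limiting matrices \(M_{\infty }^{k}\) are available as numpy arrays; the sign convention applied before clustering is our own choice, as the text does not specify one.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the post-processing described above: extract the top eigenvector
# of each limiting matrix, resolve the sign ambiguity by a simple convention
# (our choice), cluster into m0 + m1 classes with k-means++, and renormalize
# the cluster centers onto the unit sphere.

def top_eigvec(M):
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]

def recover_profiles(limit_matrices, n_profiles, seed=0):
    """limit_matrices: list of symmetric (d x d) arrays M_infty^k."""
    U = np.array([top_eigvec(M) for M in limit_matrices])
    # flip signs so the entry of largest magnitude is positive
    signs = np.sign(U[np.arange(len(U)), np.abs(U).argmax(axis=1)])
    U *= signs[:, None]
    km = KMeans(n_clusters=n_profiles, init='k-means++', random_state=seed).fit(U)
    centers = km.cluster_centers_
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)
```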

We perform experiments for different scenarios, where either the activation function or the construction of the network weights varies. Guided by our theoretical results, we pay particular attention to how the network architecture, e.g., \(m_0\) and \(m_1\), influences the simulation results. The entire procedure is rather flexible and can be adjusted in different ways, e.g., changing the distribution \(\mu _X\). To provide a fair account of the success, we fix hyperparameters of the approach throughout all experiments. Test scenarios, hyperparameters, and error measures are reported below in more detail. Afterwards, we present and discuss the results.

Scenarios and construction of the networks The network is constructed by choosing activation functions and network weights \(\{a_i : i \in [m_0]\}\), \(\{b_\ell : \ell \in [m_1]\}\), for which \(v_\ell \) is then defined via \(v_\ell = \frac{AG_0 b_\ell }{\left\| {AG_0 b_\ell }\right\| _2}\), see Definition 3. To construct activation functions, we set \(g_i(t) = \phi (t + \theta _i)\) for \(i \in [m_0]\), and \(h_\ell (t) = \phi (t + \tau _\ell )\) for \(\ell \in [m_1]\). We choose either \(\phi (t) = \tanh (t)\) or \(\phi (t) = \frac{1}{1+e^{-t}} - \frac{1}{2}\) (a shifted sigmoid function), and sample the offsets (also called biases) \(\theta _i\), \(\tau _\ell \) independently at random from \(\mathcal{N}(0,0.01)\).
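A sketch of such a test network in Python is given below; the variable names, the final summation of the second-layer outputs, and the specific random draws are our own reading of the construction above and of Definition 3 (which is not reproduced here).

```python
import numpy as np

# Sketch of the two-hidden-layer test network, as we read the construction
# above: f(x) = sum_l phi(b_l^T g(A^T x + theta) + tau_l), with entangled
# second-layer weights v_l = A G_0 b_l / ||A G_0 b_l||_2.
rng = np.random.default_rng(3)
m0, m1 = 20, 10                                    # hypothetical sizes, d = m0

phi = np.tanh                                      # or the shifted sigmoid
def dphi(t):
    return 1.0 - np.tanh(t) ** 2                   # derivative of tanh

A = np.linalg.qr(rng.standard_normal((m0, m0)))[0]          # columns a_i
B = rng.standard_normal((m0, m1))                           # columns b_l
B /= np.linalg.norm(B, axis=0, keepdims=True)
theta = rng.normal(0.0, 0.1, size=m0)                       # N(0, 0.01) biases
tau = rng.normal(0.0, 0.1, size=m1)

def f(x):
    g = phi(A.T @ x + theta)                       # first hidden layer
    return np.sum(phi(B.T @ g + tau))              # summed second-layer output

G0 = np.diag(dphi(theta))                          # G_0 = diag(g_i'(0)), since g_i(t) = phi(t + theta_i)
V = A @ G0 @ B
V /= np.linalg.norm(V, axis=0, keepdims=True)      # entangled weights v_l
```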

As made clear by our theory, see Theorem 16, a sufficient condition for successful recovery of the entangled weights is that \(\nu =C_F-1\) be small, where \(C_F\) is the upper frame constant of the entangled weights as in Definition 3. In the following numerical experiments, we wish to verify how crucial this requirement is. Thus, we test two different scenarios for the weights. The first scenario, which is designed to best fulfill the sufficient condition \(\nu \approx 0\), models both \(\{a_i : i \in [m_0]\}\) and \(\{b_\ell : \ell \in [m_1]\}\) as perturbed orthogonal systems. For their construction, we first sample orthogonal bases uniformly at random and then apply a random perturbation. The perturbation is such that \((\sum _{i=1}^{m_0}(\sigma _i(A) - 1)^2)^{1/2} \approx (\sum _{i=1}^{m_1}(\sigma _i(B) - 1)^2)^{1/2} \approx 0.3\), where \(\sigma _i(A)\) and \(\sigma _i(B)\) denote the singular values of A and B. In the second case, we sample the (entangled) weights independently from \({\text {Uni}}({\mathbb {S}^{m_0-1}})\). In this situation, as long as the dimensionality \(d=m_0\) is relatively small, the system will likely not fulfill well the condition \(\nu \approx 0\); however, as the dimension \(d=m_0\) is chosen larger, the weights tend to be more incoherent and gradually approach the previous scenario.

Hyperparameters Unless stated differently, we sample \(m_X = 1000\) Hessian locations from \(\mu _X = \sqrt{m_0}\text {Uni}\left( \mathbb {S}^{m_0-1}\right) \), and use \(\epsilon = 10^{-5}\) in the finite difference approximation (17). We generate 1000 random matrices \(\{M_0^k: k \in [1000]\}\) by sampling \(m^k \sim \mathcal{N}(0,\mathsf {Id}_{m_0+ m_1})\), and by defining \(M_0^k := \mathbb {P}_{\mathbb {S}}(\sum _{i=1}^{m_0+ m_1}m_i^k u_i)\), where the \(u_i\)’s are as in Algorithm 2. The constant \(\gamma = 2\) is used in the definition of \(F_{\gamma }\), and the iteration is stopped if \(\lambda _1(M_{j+1}^k) - \lambda _1(M_{j}^k) < 10^{-5}\), or after 200 iterations. kMeans++ is run with default settings using sklearn. All reported results are averages over 30 repetitions.
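For concreteness, a possible implementation of the Hessian sampling step is sketched below; since formula (17) is not reproduced in this section, the code uses a standard central-difference stencil, which may differ in step size and accuracy from the paper's approximation.

```python
import numpy as np

# Hedged sketch of estimating Hessians of f at sampled locations. This is a
# standard central-difference stencil, not the paper's formula (17).
def hessian_fd(f, x, eps=1e-4):
    d = x.size
    E = eps * np.eye(d)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(i, d):
            H[i, j] = (f(x + E[i] + E[j]) - f(x + E[i] - E[j])
                       - f(x - E[i] + E[j]) + f(x - E[i] - E[j])) / (4 * eps ** 2)
            H[j, i] = H[i, j]
    return H

def sample_hessian_locations(m_X, m0, rng):
    """Draw m_X points from mu_X = sqrt(m0) * Uni(S^{m0-1})."""
    X = rng.standard_normal((m_X, m0))
    return np.sqrt(m0) * X / np.linalg.norm(X, axis=1, keepdims=True)
```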

Error measures Three error measures are reported (a sketch of how they can be computed is given after the list):

  • the normalized projection error \(\frac{\Vert \hat{P}_{\mathcal{W}} - P_{\mathcal{W}}\Vert _F^2}{m_0+ m_1}\),

  • a false positive rate \(\text {FP}(T)=\frac{\#\{j : E(\hat{w}_j) > T\}}{m_0+ m_1}\), where \(T>0\) is a threshold, and E(u) is defined by,

    $$\begin{aligned} E(u) := \min _{w \in \{\pm a_i,\pm v_\ell : i \in [m_0],\ell \in [m_1]\}}\left\| {u - w}\right\| ^2_2, \end{aligned}$$
  • recovery rate \(\text {R}_a(T) = \frac{\#\{i : \mathcal{E}(a_i) < T\}}{m_0}\), and \(\text {R}_v(T) = \frac{\#\{\ell : \mathcal{E}(v_\ell ) < T\}}{m_1}\), where

    $$\begin{aligned} \mathcal{E}(u)&:= \min _{w \in \{\pm \hat{w}_j : j \in [m_0+m_1] \}}\left\| {u - w}\right\| ^2_2. \end{aligned}$$
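The following sketch (our own) shows how these quantities can be computed, assuming the recovered vectors \(\hat{w}_j\) and the reference entangled weights are stored as rows of numpy arrays of unit-norm vectors.

```python
import numpy as np

def projection_error(P_hat, P_true, m0, m1):
    """Normalized projection error ||P_hat - P_true||_F^2 / (m0 + m1)."""
    return np.linalg.norm(P_hat - P_true, 'fro') ** 2 / (m0 + m1)

def _min_sq_dist(u, V):
    """min over rows v of V and signs s in {+1, -1} of ||u - s v||_2^2."""
    return min(np.sum((V - u) ** 2, axis=1).min(),
               np.sum((V + u) ** 2, axis=1).min())

def false_positive_rate(W_hat, W_true, T):
    """FP(T): fraction of recovered rows farther than T from every +/- true weight.
    Assumes W_hat has exactly m0 + m1 rows."""
    E = np.array([_min_sq_dist(w, W_true) for w in W_hat])
    return np.mean(E > T)

def recovery_rate(W_true_layer, W_hat, T):
    """R_a(T) or R_v(T): fraction of true weights of one layer recovered within T."""
    Ecal = np.array([_min_sq_dist(w, W_hat) for w in W_true_layer])
    return np.mean(Ecal < T)
```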

Results for perturbed orthogonal weights

Fig. 2: Error in approximating \(\mathcal{W}\) for perturbed orthogonal weights and different activation functions

Fig. 3: False positive and recovery rates for perturbed orthogonal weights and for different activation functions

The results of the study are presented in Figs. 2 and 3 and show that our procedure typically recovers many of the network weights, while suffering only a few false positives. Considering for example a sigmoid network, we have almost perfect recovery of the weights in both layers at a threshold of \(T = 0.05\) for any network architecture, see Fig. 3a, c. For a \(\tanh \)-network, the performance is slightly worse, but we still recover most weights in the second layer, and a large portion in the first layer at a reasonable threshold, see Fig. 3b, d.

Inspecting the plots more closely, we can notice some shared trends and differences between sigmoid and \(\tanh \) networks. In both cases, the performance improves when increasing the input dimensionality or, equivalently, the number of neurons in the first layer, even though the number of weights that need to be recovered increases accordingly. This is particularly the case for \(\tanh \)-networks as visualized in Fig. 3b, d and is most likely caused by reduced correlation of the weights in higher dimensions. As previously mentioned, the correlation is encoded within the constant \(\nu =C_F-1\) used in the analysis in Sect. 3.

For fixed \(m_0\) on the other hand, different activation functions react differently to changes of \(m_1\). For \(m_1\) larger, considering a sigmoid network, the projection error increases, and the recovery of weights in the second layer worsens as shown in Fig. 2a, c. This is expected by Theorem 5. Inspecting the results for \(\tanh \) networks, the projection error actually decreases when increasing \(m_1\), see Fig. 2b, and the recovery performance gets better. Figure 3d shows that especially weights in the first layer are more easily recovered if \(m_1\) is large, such that the case \(m_0= 45\), \(m_1= 23\) allows for perfect recovery at a threshold \(T = 0.05\). This behavior cannot be fully explained by our general theory, e.g., Theorem 5.

Results for random weights from the unit sphere When sampling the weights independently from the unit sphere, the recovery problem seems more challenging for moderate dimension \(d=m_0\) and for both activation functions. This confirms the expectation that the smallness of \(\nu = C_F-1\) is somehow crucial. Figure 4c, d suggest that especially recovering the weights of the second layer is more difficult than in the perturbed orthogonal case. Still, we achieve good performance in many cases. For sigmoid networks, Fig. 4c shows that we always recover most weights in the first layer, and a large portion of weights in the second layer if \(m_1/m_0\) is small. Moreover, keeping \(m_1/m_0\) constant while increasing \(m_0\) improves the performance significantly, as we expect from an improved constant \(\nu = C_F-1\). Figure 4a, c show almost perfect recovery for \(m_0= 45,\ m_1= 5\), while suffering only few false positives.

For \(\tanh \)-networks, Fig. 4d shows that increasing \(m_0\) benefits recovery of weights in both layers, while increasing \(m_1\) benefits recovery of first layer weights and harms recovery of second layer weights. We still achieve small false positive rates in Fig. 4b and good recovery for \(m_0= 45\); this trend continues when further increasing \(m_0\).

Finally, a notable difference between the perturbed orthogonal case and the unit-sphere case is the behavior of the projection error \(\Vert P_{\hat{\mathcal{W}}}- P_{\mathcal{W}}\Vert _F/(m_0+m_1)\) for networks with sigmoid activation function. Comparing Figs. 2a and 5a, the dependency of the projection error on \(m_1\) is stronger when sampling independently from the unit-sphere. This is explained by Theorem 5 since \(\left\| {B}\right\| ^2\) is independent of \(m_1\) in the perturbed orthogonal case and grows like \(\mathcal {O}(\sqrt{m_1})\) when sampling from the unit-sphere.

5 Open Problems

With the theoretical results of Sect. 3 and the numerical experiments of Sect. 4, we have shown how to reliably recover the entangled weights \(\{\hat{w}_j: j \in [m_0+m_1]\} \approx \{w_j: j \in [m_0+m_1]\} = \{a_i: i \in [m_0]\} \cup \{v_\ell : \ell \in [m_1]\}\). However, some issues remain open.

  (i)

    In Theorem 5, the dependency of \(\alpha >0\) on the network architecture and on the input distribution \(\mu _X\) is left implicit. However, it plays a crucial role for fully estimating the overall sample complexity.

  (ii)

    While we could prove that Algorithm 3 increases the spectral norm of its iterates in \(\hat{\mathcal{W}}\cap \mathbb S\), we could not yet show that it always converges to nearly rank-1 matrices in \(\hat{\mathcal{W}}\), although this is observed numerically, see also Remark 24. We also could not exclude the existence of spurious local minimizers of the nonlinear program (31), as stated in Theorem 16. However, we conjecture that there are none or that they are somehow hard to observe numerically.

  (iii)

    Obtaining the approximating vectors \(\{\hat{w}_j: j \in [m_0+m_1]\} \approx \{w_j: j \in [m_0+m_1]\} = \{a_i: i \in [m_0]\} \cup \{v_\ell : \ell \in [m_1]\}\) does not suffice to reconstruct the entire network. In fact, it is impossible a priori to know whether \(\hat{w}_j\) approximates some \(a_i\) or some \(v_\ell \), up to sign and permutations, and the attribution to the corresponding layer needs to be derived from querying the network.

  (iv)

    Once we have obtained, up to sign and permutations, \(\{\hat{a}_i: i \in [m_0]\} \approx \{a_i: i \in [m_0]\}\) and \(\{\hat{v}_\ell : \ell \in [m_1]\} \approx \{v_\ell : \ell \in [m_1]\}\) from properly grouping \(\{\hat{w}_j: j \in [m_0+m_1]\}\), it remains to approximate/identify the activation functions \(g_i\) and \(h_\ell \). In the case where \(g_i(\cdot ) = \phi (\cdot - \theta _i)\) and \(h_\ell (\cdot ) = \phi (\cdot - \tau _\ell )\), this simply means identifying the shifts \(\theta _i\), \(i \in [m_0]\), and \(\tau _\ell \), \(\ell \in [m_1]\). Such an identification is also crucial for computing the matrix \(G_0={\text {diag}}(g_1'(0),\dots ,g_{m_0}'(0))\), which allows the disentanglement of the weights \(b_\ell \) from the weights A and \(v_\ell = AG_0 b_\ell /\Vert AG_0 b_\ell \Vert _2\). At this point, the network is fully reconstructed.

  (v)

    The generalization of our approach to networks with more than two hidden layers is clearly the next relevant issue to be considered as a natural development of this work.

Fig. 4 False positive and recovery rates for weights sampled uniformly at random from the unit sphere and for different activation functions

Fig. 5 Error in approximating \(\mathcal{W}\) for weights sampled independently from the unit sphere and for different activation functions

While problems (i) and (ii) seem to be difficult to solve by the methods we used in this paper, we think that problems (iii) and (iv) are solvable both theoretically and numerically with just a bit more effort. For a self-contained conclusion of this paper, in the following sections we sketch some possible approaches to these issues, as a glimpse towards future developments, which will be more exhaustively included in [21]. The generalization of our approach to networks with more than two hidden layers, as mentioned in (v), is surprisingly simpler than one may expect and is in the course of finalization [21]. For a network \(f(x):= f(x;W_1,\dots ,W_L) = 1^T g_L(W_L^T g_{L-1}(W_{L-1}^T \cdots g_1 (W_1^T x)\cdots ))\), with \(L>2\), again by second order differentiation it is possible to obtain an approximation

$$\begin{aligned} \hat{\mathcal{W}}\approx & {} {\text {span}} \{w_{1,j} \otimes w_{1,j}, (W_2 G_1 w_{1,j}) \otimes (W_2 G_1 w_{1,j}), \\&\dots , (W_L G_L \dots W_2 G_1 w_{1,j}) \otimes (W_L G_L \dots W_2 G_1 w_{1,j})\}, \end{aligned}$$

of the matrix space spanned by the tensors of entangled weights, where the \(G_i\) are suitable diagonal matrices depending on the activation functions. The tensors \((W_k G_k \dots W_2 G_1 w_{1,j}) \otimes (W_k G_k \dots W_2 G_1 w_{1,j})\) can again be identified by a minimal rank principle. The disentanglement proceeds again by a layer-by-layer procedure as in this paper, see also [5].

6 Reconstruction of the Entire Network

In this section, we address problems (iii) and (iv) as described in Sect. 5. Our final goal is, of course, to construct a two-layer network \(\hat{f}\) with \(m_0\) and \(m_1\) nodes in the respective layers such that \(\hat{f} \approx f\). Additionally, we study whether the individual building blocks (e.g., the matrices \(\hat{A}\), \(\hat{B}\), and the biases in both layers) of \(\hat{f}\) match their corresponding counterparts of f.

To construct \(\hat{f}\), we first discuss how the recovered entangled weights \(\{\hat{w}_{j} : j \in [m_0+m_1]\}\) (see Sect. 4) can be assigned to either the first or the second layer, depending on whether \(\hat{w}_{j}\) approximates one of the \(a_i\)’s or one of the \(v_{\ell }\)’s. Afterwards we discuss a modified gradient descent approach that optimizes the deparametrized network (its entangled weights are known at this point!) over the remaining, unknown parameters of the network function, e.g., the biases \(\theta _i\) and \(\tau _\ell \).

6.1 Distinguishing First and Second Layer Weights

Attributing approximate entangled weights to the first or second layer is generally a challenging task. In fact, even the true weights \(\{a_i: i \in [m_0] \}\), \(\{v_\ell : \ell \in [m_1]\}\) cannot be assigned to the correct layer based exclusively on their entries when no additional a priori information (e.g., some distributional assumption) is available. Therefore, assigning \(\hat{w}_{j}\), \(j \in [m_0+ m_1]\), to the correct layer requires using the network f itself again and thus querying additional information.

The strategy we sketch here is designed for sigmoidal activation functions and networks with (perturbed) orthogonal weights in each layer. Sigmoidal functions are monotonic, have bell-shaped first derivative, and are bounded by two horizontal asymptotes as the input tends to \(\pm \infty \). If activation functions \(\{g_i: i \in [m_0]\}\) and \(\{h_\ell : \ell \in [m_1]\}\) are translated sigmoidal, their properties imply

$$\begin{aligned} \left\| {\nabla f(tw)}\right\| _2 = \left( \sum _{i=1}^{m_0} g_i'( t a_i^T w)^2\left( \sum ^{m_1}_{\ell =1} h_\ell '(b_\ell ^T g(tA^T w))b_{i\ell }\right) ^2\right) ^{\frac{1}{2}} \rightarrow 0,\quad \text {as } t\rightarrow \infty , \end{aligned}$$
(43)

whenever the direction w has nonzero correlation \(a_i^T w \ne 0\) with every first layer neuron \(a_i\), \(i \in [m_0]\).

Assume now that \(\{a_i : i \in [m_0]\}\) is a perturbed orthonormal system, and that the second layer weights \(\{b_{\ell } : \ell \in [m_1]\}\) are generic and dense (nonsparse). Recalling the definition \(v_\ell = AG_0b_\ell /\left\| {AG_0b_\ell }\right\| \), the vector \(v_\ell \) has, in this case, generally a nonvanishing correlation with each vector in \(\{a_i : i \in [m_0]\}\), while \(a_i^T a_j \approx 0\) for any \(i \ne j\). Combining this with observation (43), it follows that \(\left\| {\nabla f(t a_i)}\right\| \) is expected to tend to 0 much more slowly than \(\left\| {\nabla f(t v_{\ell })}\right\| \) as \(t\rightarrow \infty \). In fact, if \(\{a_i : i \in [m_0]\}\) were an exactly orthonormal system, \(\left\| {\nabla f(t a_i)}\right\| \) would eventually approach a positive constant as \(t\rightarrow \infty \). We illustrate in Fig. 6 the different behavior of the trajectories \(t \mapsto \left\| {\nabla f(tw)}\right\| _2\) for \(w \in \{\hat{w}_j \approx a_i \text{ for some } i\}\) and for \(w \in \{\hat{w}_j \approx v_\ell \text{ for some } \ell \}\).

Fig. 6 Trajectories \(t \mapsto \left\| {\nabla f(tw)}\right\| _2\) for \(w \in \{\hat{w}_j: j \in [m]\}\). The blue trajectories correspond to \(w \in \{\hat{w}_j \approx a_i \text{ for some } i\}\), the red trajectories to \(w \in \{\hat{w}_j \approx v_\ell \text{ for some } \ell \}\). The separation of the trajectories reflects the different decay properties

Practically, for \(T \in \mathbb N\) and for each candidate vector in \(\{\hat{w}_{j} : j\in [m_0+m_1]\}\), we query f to compute \(\Delta _\epsilon f(t_k\hat{w}_j)\) for a few steps \(\{t_{k} : k \in [T]\}\) in order to approximate

$$\begin{aligned} \left\| {\left\| {\nabla f(t\hat{w}_j)}\right\| _2}\right\| _{L_2(\mathbb {R})}^2 \approx \sum _{k=1}^{T} \left\| {\nabla f(t_k\hat{w}_j)}\right\| ^2 \approx \sum _{k=1}^{T} \left\| {\Delta _\epsilon f(t_k\hat{w}_j)}\right\| ^2 =: \hat{\mathcal{I}}(\hat{w}_j). \end{aligned}$$

Then we compute a permutation \(\pi : [m] \rightarrow [m]\) that orders the candidates by decreasing value of \(\hat{\mathcal{I}}\), i.e., \(\hat{\mathcal{I}}(\hat{w}_{\pi (i)}) \ge \hat{\mathcal{I}}(\hat{w}_{\pi (j)})\) whenever \(i < j\). The candidates \(\{\hat{w}_{\pi (j)} : j=1,\ldots ,m_0\}\) exhibit the slowest decay, respectively the largest accumulated gradient norms, and are thus assigned to the first layer. The remaining candidates \(\{\hat{w}_{\pi (\ell )}: \ell =m_0+1,\ldots ,m_0+m_1\}\) are assigned to the second layer.
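
The following Python sketch illustrates this assignment step, assuming the network is available as a callable f and using a central finite-difference approximation of the gradient; the grid of values \(t_k\), the step \(\epsilon \), and all names are placeholders.

```python
import numpy as np

def fd_gradient(f, x, eps=1e-5):
    """Central finite-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def assign_layers(f, W_hat, m0, t_grid, eps=1e-5):
    """Assign each candidate column of W_hat to layer 1 or 2 via the decay of
    t -> ||grad f(t w)||_2; the candidates with the largest accumulated gradient
    norms (slowest decay) are attributed to the first layer."""
    m = W_hat.shape[1]
    I_hat = np.array([sum(np.sum(fd_gradient(f, t * W_hat[:, j], eps) ** 2)
                          for t in t_grid) for j in range(m)])
    order = np.argsort(-I_hat)      # sort by decreasing accumulated gradient norm
    labels = np.full(m, 2)
    labels[order[:m0]] = 1          # top m0 candidates -> first layer
    return labels
```

With the experimental settings reported below, one would call, e.g., `assign_layers(f, W_hat, m0, t_grid=np.arange(-19, 21))`.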

Table 1 Success rates \(\mathcal{L}_1\) and \(\mathcal{L}_2\) (see (44)) when assigning candidates \(\{\hat{w}_{i} : i \in [m_0+m_1]\}\) to either first or second layer of the network

Numerical experiments We have applied the proposed strategy to assign vectors \(\{\hat{w}_{j} : j \in [m_0+m_1]\}\), which are outputs of experiments conducted in Sect. 4, to either the first or the second layer. Since each \(\hat{w}_j\) does not exactly correspond to a vector in \(\{a_i : i \in [m_0]\}\) or \(\{v_\ell : \ell \in [m_1]\}\), we assign a ground truth label \(L_j = 1\) to \(\hat{w}_j\) if the closest vector to \(\hat{w}_j\) belongs to \(\{a_i : i \in [m_0]\}\), and \(L_j = 2\) if it belongs to the set \(\{v_\ell : \ell \in [m_1]\}\). Denoting similarly the predicted label \(\hat{L}_j = 1\) if \(\pi (j) \in \{1,\ldots ,m_0\}\) and \(\hat{L}_j = 2\) otherwise, we compute the success rates

$$\begin{aligned} \mathcal{L}_1 := \frac{\#\{j : L_j = 1 \text { and } \hat{L}_j = 1\}}{m_0},\quad \mathcal{L}_2 := \frac{\#\{j : L_j = 2 \text { and } \hat{L}_j = 2\}}{m_1} \end{aligned}$$
(44)

to assess the proposed strategy. Hyperparameters are \(\epsilon = 10^{-5}\) for the step length in the finite difference approximation \(\Delta _\epsilon f(\cdot )\), and \(t_k = - 20 + k\) for \(k \in [40]\).
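
A small sketch of this evaluation, assuming the predicted labels are given as a NumPy array of 1s and 2s (e.g., as produced by the sketch above) and with all array names hypothetical, could read as follows.

```python
import numpy as np

def layer_success_rates(W_hat, labels_pred, A, V):
    """Success rates L1, L2 of (44): the ground-truth label of each candidate is
    the layer of its nearest true weight (up to sign)."""
    m0, m1 = A.shape[1], V.shape[1]
    AV = np.hstack([A, V])                    # columns: a_1..a_m0, v_1..v_m1
    true_layer = np.concatenate([np.ones(m0), 2 * np.ones(m1)])
    labels_true = np.empty(W_hat.shape[1])
    for j, w in enumerate(W_hat.T):
        d = np.minimum(np.sum((AV - w[:, None]) ** 2, axis=0),
                       np.sum((AV + w[:, None]) ** 2, axis=0))
        labels_true[j] = true_layer[np.argmin(d)]
    L1 = np.sum((labels_true == 1) & (labels_pred == 1)) / m0
    L2 = np.sum((labels_true == 2) & (labels_pred == 2)) / m1
    return L1, L2
```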

The results for all four scenarios considered in Sect. 4 are reported in Table 1. We see that our simple strategy achieves remarkable success rates, in particular if the network weights in each layer represent perturbed orthogonal systems. If the weights are sampled uniformly from the unit sphere with moderate dimension \(d=m_0\), then, as one may expect, the success rate drops. In fact, for small \(d=m_0\), the vectors \(\{a_i : i \in [m_0]\}\) tend to be less orthogonal, and thus the assumption \(a_i^T a_j \approx 0\) for \(i\ne j\) is not satisfied anymore.

Finally, we stress that the proposed strategy is simple and efficient, and relies only on a few additional point queries of f, which are negligible compared to the recovery step itself (for reasonable query size T). In fact, the method relies on a single (nonlinear) feature of the map \(t \mapsto \left\| {\nabla f(t\hat{w}_j)}\right\| _2\) in order to decide upon the label of \(\hat{w}_j\). We consider it an interesting direction for future investigation to develop more robust approaches, potentially using higher-dimensional features of the trajectories \(t \mapsto \left\| {\nabla f(t\hat{w}_j)}\right\| _2\), in order to achieve high success rates even if \(a_i^T a_j \approx 0\) for \(i\ne j\) does not hold anymore.

6.2 Reconstructing the Network Function Using Gradient Descent

The previous section allows assigning unlabeled candidates \(\{\hat{w}_j : j \in [m_0+ m_1]\}\) to either the first or second layer, resulting in matrices \(\hat{A} = [\hat{a}_1|\ldots |\hat{a}_{m_0}]\) and \(\hat{V} = [\hat{v}_1|\ldots |\hat{v}_{m_1}]\) that ideally approximate A and V up to column signs and permutations. Assuming that the network \(f \in \mathcal{F}(m_0,m_0, m_1)\) is generated by shifts of one activation function, i.e., \(g_i(t) = \phi (t + \theta _i)\) and \(h_\ell (t) = \phi (t + \tau _{\ell })\) for some \(\phi \), only signs, permutations, and the bias vectors \(\theta \in \mathbb {R}^{m_0}\), \(\tau \in \mathbb {R}^{m_1}\) remain to be determined in order to fully reconstruct f. In this section, we show how to identify these remaining parameters by applying gradient descent to a least squares fit of the output misfit of the deparametrized network. In fact, as we clarify below, the original network f can be explicitly described as a function of the known entangled weights \(a_i\) and \(v_\ell \) and of the remaining unknown parameters (signs, permutations, and biases), see Proposition 26 and Corollary 27.

Let now \(\mathcal{D}_{m}\) denote the set of \(m\times m\) diagonal matrices, and define a parameter space \(\Omega := \mathcal{D}_{m_1}\times \mathcal{D}_{m_0}\times \mathcal{D}_{m_0} \times \mathbb {R}^{m_0}\times \mathbb {R}^{m_1}\). To reconstruct the original network f, we propose to fit parameters \((D_1, D_2, D_3, w, z) \in \Omega \) of a function \(\hat{f}: \mathbb {R}^{m_0}\times \Omega \rightarrow \mathbb {R}\) defined by

$$\begin{aligned} \hat{f}(x;D_1, D_2, D_3, w, z)&= 1^T\phi (D_1 \hat{V}^T \hat{A}^{-T} D_2 \phi (D_3 \hat{A}^T x + w) + z) \end{aligned}$$

to a number of additionally sampled points \(\{(X_i,Y_i) : i \in [m_f]\}\) where \(Y_i = f(X_i)\) and \(X_i \sim \mathcal{N}(0,\mathsf {Id}_{m_0})\). The parameter fitting can be formulated as solving the least squares

$$\begin{aligned} \min _{(D_1, D_2, D_3, w, z) \in \Omega } J(D_1,D_2,D_3, w, z):=\sum _{i=1}^{m_f}\left( Y_i - \hat{f}(X_i;D_1, D_2, D_3, w, z)\right) ^2. \end{aligned}$$
(45)

We note that, due to the identification of the entangled weights and the deparametrization of the problem, \(\dim (\Omega ) = 3m_0+ 2m_1\), which implies that the least squares problem has significantly fewer free parameters than the number \(m_0^2 + (m_0\times m_1)+ (m_0+m_1)\) of original parameters of the entire network. Hence, our previous theoretical results of Sect. 3 and the numerical experiments of Sect. 4 greatly scale down the usual effort of fitting all parameters at once. We also mention at this point that the optimization (45) might have multiple global solutions due to possible symmetries, see also [20] and Remark 28, and we shall take the most obvious ones into account in our numerical experiments.
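
To make the parametrization concrete, a minimal sketch of the deparametrized network \(\hat{f}\) and of the objective (45) could look as follows; the diagonal matrices are stored as vectors, \(\hat{A}\), \(\hat{V}\), and the activation \(\phi \) are inputs, and in practice the gradient of J would be obtained in closed form or by automatic differentiation.

```python
import numpy as np

def f_hat(x, D1, D2, D3, w, z, A_hat, V_hat, phi):
    """Deparametrized two-layer network
       1^T phi(D1 V_hat^T A_hat^{-T} D2 phi(D3 A_hat^T x + w) + z),
       with the diagonal matrices D1, D2, D3 stored as vectors."""
    inner = phi(D3 * (A_hat.T @ x) + w)                                       # in R^{m0}
    outer = phi(D1 * (V_hat.T @ np.linalg.solve(A_hat.T, D2 * inner)) + z)    # in R^{m1}
    return outer.sum()

def objective_J(params, X, Y, A_hat, V_hat, phi):
    """Least squares misfit (45); params = (D1, D2, D3, w, z)."""
    D1, D2, D3, w, z = params
    preds = np.array([f_hat(x, D1, D2, D3, w, z, A_hat, V_hat, phi) for x in X])
    return np.sum((Y - preds) ** 2)
```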

We will now show that there exist parameters \((D_1, D_2, D_3, w, z) \in \Omega \) that allow for exact recovery of the original network, whenever \(\hat{A}\) and \(\hat{V}\) are correct up to signs and permutations. We first need the following proposition, which provides a different reparametrization of the network using \(\hat{A}\) and \(\hat{V}\). The proof requires only elementary linear algebra and properties of sign and permutation matrices; details are deferred to Appendix 3.

Proposition 26

Let \(f \in \mathcal{F}(m_0,m_0, m_1)\) with \(g_i(t) = \phi (t + \theta _i)\) and \(h_\ell (t) = \phi (t + \tau _{\ell })\), and define the function \(\tilde{f}: \mathbb {R}^{m_0}\times \mathcal{D}_{m_0} \times \mathcal{D}_{m_1} \times \mathbb {R}^{m_0} \times \mathbb {R}^{m_1} \rightarrow \mathbb {R}\) via

$$\begin{aligned} \tilde{f}(x;D, D', w, z) = 1^T\phi (D' \hat{B}^T \phi (D \hat{A}^T x + w) + z),\quad \text {with}\quad \hat{b}_\ell := \frac{{\text {diag}}\left( \left( \phi '(w)\right) ^{-1}\right) D\hat{A}^{-1}\hat{v}_\ell }{\left\| {{\text {diag}}\left( \left( \phi '(w)\right) ^{-1}\right) D \hat{A}^{-1}\hat{v}_\ell }\right\| }. \end{aligned}$$

If there are sign matrices \(S_A\), \(S_V\), and permutations \(\pi _A\), \(\pi _V\) such that \(A\pi _A = \hat{A} S_A\), \(V\pi _V = \hat{V} S_V\), then we have \(f(x) = \tilde{f}(x; S_A, S_V, \pi _A^T \theta , \pi _V^T \tau )\).

We note here that replacing \(\hat{f}\) by \(\tilde{f}\) in (45) is tempting because it further reduces the number of parameters (\(\dim (\mathcal{D}_{m_0} \times \mathcal{D}_{m_1} \times \mathbb {R}^{m_0} \times \mathbb {R}^{m_1}) = 2(m_0+ m_1)\)), but an explicit computation shows that evaluating the gradient of \(\tilde{f}\) with respect to D also requires evaluating \(D^{-1}\). Keeping in mind that D ideally converges to \(S_A\) during the optimization, the diagonal entries of D are likely to cross zero while optimizing. Thus such a minimization may turn out to be unstable, and we instead work with \(\hat{f}\). The following corollary shows that this form also allows finding optimal parameters leading to the original network.

Corollary 27

Let \(f \in \mathcal{F}(m_0,m_0, m_1)\) with \(g_i(t) = \phi (t + \theta _i)\) and \(h_\ell (t) = \phi (t + \tau _{\ell })\). If there exist sign matrices \(S_A\), \(S_V\), and permutations \(\pi _A\), \(\pi _V\) such that \(A\pi _A = \hat{A} S_A\), \(V\pi _V = \hat{V} S_V\), then there exist diagonal matrices \(D_1, D_2\) such that \(f(x) = \hat{f}(x;D_1, D_2, S_A, \pi _A^T \theta , \pi _V^T \tau )\).

Proof

Based on Proposition 26, we can rewrite \(f(x) = 1^T\phi (S_V \hat{B}^T \phi (S_A \hat{A}^T x + \pi _A^T \theta ) + \pi _V^T \tau )\), so it remains to show that \(S_V \hat{B}^T = D_1 \hat{V}^T \hat{A}^{-T} D_2\) for diagonal matrices \(D_1\), \(D_2\). First we note

$$\begin{aligned} {\text {diag}}(\phi '(\pi _A^T \theta )^{-1}) = \pi _A^T {\text {diag}}(\phi '(\theta )^{-1}) \pi _A = \pi _A^T G^{-1} \pi _A. \end{aligned}$$

Using this identity, and inserting \(D = S_A\) and \(w = \pi _A^T \theta \) in the definition of \(\hat{B}\) in Proposition 26, it follows that

$$\begin{aligned} \hat{B}^T = {\text {diag}}(\Vert \pi _A^T G^{-1} \pi _A S_A \hat{A}^{-1} v_1 \Vert , \ldots , \Vert \pi _A^T G^{-1} \pi _A S_A \hat{A}^{-1} v_{m_1}\Vert ) \hat{V}^T \hat{A}^{-T}S_A\pi _A^T G^{-1} \pi _A \end{aligned}$$

Multiplying by \(S_V\) from the left, we obtain

$$\begin{aligned} S_V \hat{B}^T= & {} \underbrace{S_V {\text {diag}}(\Vert \pi _A^T G^{-1} \pi _A S_A \hat{A}^{-1} v_1 \Vert , \ldots , \Vert \pi _A^T G^{-1} \pi _A S_A \hat{A}^{-1} v_{m_1}\Vert )}_{=D_1}\\&\hat{V}^T \hat{A}^{-T}\underbrace{S_A \pi _A^T G^{-1} \pi _A}_{=D_2}. \end{aligned}$$

Remark 28

(Simplification for odd functions) If \(\phi \) in Proposition 26 satisfies \(\phi (-t) = -\phi (t)\), then \(\hat{f}(x; D_1, S D_2, S D_3, S w, z) = \hat{f}(x; D_1, D_2, D_3, w, z)\) for an arbitrary sign matrix \(S \in \mathcal{D}_{m_0}\). Thus, choosing \(S = S_A\), there are also diagonal matrices \(D_1\) and \(D_2\) with \(f(x) = \hat{f}(x; D_1, D_2, \mathsf {Id}_{m_0}, S_A \pi _A^T \theta , \pi _V^T \tau )\).

Assuming \(\hat{A}\) and \(\hat{V}\) are correct up to sign and permutation, Corollary 27 implies that \(J = 0\) is the global optimum, and it is attained by parameters leading to the original network f. Furthermore, Remark 28 implies that there is ambiguity with respect to \(D_3\), if \(\phi \) is an odd function. Thus we can also prescribe \(D_3 = \mathsf {Id}_{m_0}\) and neglect optimizing this variable if \(\phi \) is odd.
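
As a quick sanity check of this symmetry (not part of the reconstruction procedure itself), one can verify numerically that, for an odd \(\phi \), flipping the signs of \(D_2\), \(D_3\), and w simultaneously leaves \(\hat{f}\) unchanged. In the self-contained sketch below, \(\hat{A}\), \(\hat{V}\), and all parameters are random placeholders, and the diagonal matrices are stored as vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
m0, m1 = 5, 3
A_hat = rng.standard_normal((m0, m0))
V_hat = rng.standard_normal((m0, m1))
phi = np.tanh                                   # any odd activation

def f_hat(x, D1, D2, D3, w, z):
    inner = phi(D3 * (A_hat.T @ x) + w)
    return np.sum(phi(D1 * (V_hat.T @ np.linalg.solve(A_hat.T, D2 * inner)) + z))

D1, z = rng.standard_normal(m1), rng.standard_normal(m1)
D2 = rng.standard_normal(m0)
D3 = rng.standard_normal(m0)
w = rng.standard_normal(m0)
S = rng.choice([-1.0, 1.0], size=m0)            # arbitrary sign matrix S

x = rng.standard_normal(m0)
assert np.isclose(f_hat(x, D1, S * D2, S * D3, S * w, z),
                  f_hat(x, D1, D2, D3, w, z))   # invariance claimed in Remark 28
```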

We now study numerically the feasibility of (45). First, we consider the case \(\hat{A} = A\) and \(\hat{V} = V\) to assess (45) in isolation, so as not to suffer from possible errors of other parts of our learning procedure (see Sects. 4 and 6.1). Afterwards we also take these additional approximations into consideration and present results for \(\hat{A} \approx A\) and \(\hat{V} \approx V\).

Numerical experiments

Table 2 Errors of the reconstructed network using (45) when prescribing \(\hat{A} = A\) and \(\hat{V} = V\)

We minimize (45) by standard gradient descent with learning rate 0.5 if \(\phi (t) = \frac{1}{1+e^{-t}} - \frac{1}{2}\) (shifted sigmoid), respectively learning rate 0.025 if \(\phi (t) = \tanh (t)\). We sample \(m_f = 10(m_0+ m_1)\) additional points, which is only slightly more than the number of free parameters. Gradient descent is run for 500K iterations (due to the small number of variables, this is not time consuming) and is stopped prematurely only if the iteration stalls. Initially we set \(D_2 = D_3 = \mathsf {Id}_{m_0}\), and all other variables are set to random draws from \(\mathcal{N}(0,0.1)\).
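
For concreteness, a stripped-down version of this fitting loop might look as follows. The gradient is approximated here by finite differences purely for illustration (in practice one would use exact or automatic gradients), J is assumed to be a closure mapping the stacked parameter vector to the loss in (45) (e.g., a wrapper around the objective sketched after (45)), and the defaults only loosely mirror the sigmoid settings above; the scale of the random initialization is an assumption on how \(\mathcal{N}(0,0.1)\) is to be read.

```python
import numpy as np

def numerical_grad(J, theta, eps=1e-6):
    """Central finite-difference gradient of J at the stacked parameter vector theta."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return g

def fit_remaining_parameters(J, m0, m1, lr=0.5, n_iter=500_000, tol=1e-12, rng=None):
    """Plain gradient descent on (45) over theta = (diag D1, diag D2, diag D3, w, z)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.concatenate([
        0.1 * rng.standard_normal(m1),   # diag(D1), random initialization (assumed scale)
        np.ones(m0),                     # diag(D2) = Id
        np.ones(m0),                     # diag(D3) = Id
        0.1 * rng.standard_normal(m0),   # w
        0.1 * rng.standard_normal(m1),   # z
    ])
    prev = np.inf
    for _ in range(n_iter):
        val = J(theta)
        if abs(prev - val) < tol:        # premature stop if the iteration stalls
            break
        theta -= lr * numerical_grad(J, theta)
        prev = val
    return theta
```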

Denoting by \(\omega ^* = (D_1^*, D_2^*, D_3^*, w^*, z^*) \in \Omega \) the output of gradient descent, we measure the relative mean squared error (MSE) and the relative \(L_{\infty }\)-error

$$\begin{aligned} \text {MSE} = \frac{\sum _{i=1}^{m_{\text {test}}} (\hat{f}(Z_i;\omega ^*) - f(Z_i))^2}{\sum _{i=1}^{m_{\text {test}}}f(Z_i)^2},\quad \text {E}_{\infty } = \frac{\max _{i \in [m_{\text {test}}]}\left| {\hat{f}(Z_i;\omega ^*) - f(Z_i)}\right| }{\max _{i \in [m_{\text {test}}]}\left| {f(Z_i)}\right| }, \end{aligned}$$

using \(m_{\text {test}} = 50{,}000\) samples \(Z_i \sim \mathcal{N}(0,\mathsf {Id}_{m_0})\). Moreover, we also report the relative bias errors

$$\begin{aligned} E_{\theta } = \frac{\left\| {w^* - \theta }\right\| ^2}{\left\| {\theta }\right\| ^2},\quad E_{\eta } = \frac{\left\| {z^* - \eta }\right\| ^2}{\left\| {\eta }\right\| ^2}, \end{aligned}$$

which indicate if the original bias vectors are recovered. We repeat each experiment 30 times and report averaged values.
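
These error metrics can be computed with a short routine like the following, where f and the fitted \(\hat{f}\) are callables, \(\theta \) and \(\eta \) denote the true bias vectors of the two layers, and \(w^*\), \(z^*\) are the fitted biases; all names are placeholders.

```python
import numpy as np

def reconstruction_errors(f, f_hat_fit, theta, eta, w_star, z_star, m0,
                          m_test=50_000, rng=None):
    """Relative MSE and L_inf prediction errors plus relative bias errors E_theta, E_eta."""
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal((m_test, m0))                # test points Z_i ~ N(0, Id)
    y_true = np.array([f(z) for z in Z])
    y_pred = np.array([f_hat_fit(z) for z in Z])
    mse = np.sum((y_pred - y_true) ** 2) / np.sum(y_true ** 2)
    e_inf = np.max(np.abs(y_pred - y_true)) / np.max(np.abs(y_true))
    e_theta = np.linalg.norm(w_star - theta) ** 2 / np.linalg.norm(theta) ** 2
    e_eta = np.linalg.norm(z_star - eta) ** 2 / np.linalg.norm(eta) ** 2
    return mse, e_inf, e_theta, e_eta
```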

Table 2 presents the results of the experiments and shows that we reconstruct a network function that is very close to the original network f in both the \(L_2\) and the \(L_{\infty }\) norm, and in every scenario. The maximal error is \(\approx 10^{-3}\), which is likely further reducible by increasing the number of gradient descent iterations, or by using finer-tuned learning rates or acceleration methods. Therefore, the experiments strongly suggest that we are indeed reconstructing a function that approximates f uniformly well. Inspecting the errors \(E_{\theta }\) and \(E_{\eta }\) also supports this claim, at least in all scenarios where the \(\tanh \) activation is used. In many cases, the relative errors are below \(10^{-7}\), implying that we recover the original bias vectors of the network. Surprisingly, the accuracy of the recovered biases drops by a few orders of magnitude in the sigmoid case, despite convincing results when measuring predictive performance in \(L_2\) and \(E_{\infty }\). We believe that this is due to a faster flattening of the gradients around the stationary point compared to the case of a \(\tanh \) activation function, and that it can be improved by more sophisticated strategies for choosing the gradient descent step size. We also tested (45) when fixing \(D_3 = \mathsf {Id}_{m_0}\), since \(\tanh \) and the shifted sigmoid are odd functions and thus Remark 28 applies. The results are consistently slightly better than those in Table 2, but qualitatively similar.

Table 3 Errors of the reconstructed network using (45) when using approximated \(\hat{A} \approx A\) and \(\hat{V} \approx V\) (up to sign and permutation)

We ran similar experiments for perturbed orthogonal weights, now using \(\hat{A}\) and \(\hat{V}\) precomputed with the methods described in Sects. 4 and 6.1. The quality of the results varies depending on whether \(\hat{A} \approx A\) and \(\hat{V} \approx V\) (up to sign and permutation) holds, or whether a fraction of the weights has not been recovered. To isolate cases where \(\hat{A} \approx A\) and \(\hat{V} \approx V\) holds, we compute the averaged MSE and \(L_{\infty }\) error over all trials satisfying

$$\begin{aligned} \sum _{i=1}^{m_0}\mathcal{E}(a_i) + \sum _{\ell = 1}^{m_1}\mathcal{E}(v_{\ell }) < 0.5 \quad \text {(see Sect. 4 for the definition of } \mathcal{E}\text {)}. \end{aligned}$$
(46)

We report the averaged errors and the number of trials satisfying this condition in Table 3. It shows that the reconstructed function is close to the original function, even if the weights are only approximately correct. Therefore we conclude that minimizing (45) provides a very efficient way of learning the remaining network parameters from just a few additional samples, once the entangled network weights A and V are (approximately) known.