Robust and Resource-Efficient Identification of Two Hidden Layer Neural Networks

We address the structure identification and the uniform approximation of two fully nonlinear layer neural networks of the type $f(x)=1^T h(B^T g(A^T x))$ on $\mathbb R^d$, where $g=(g_1,\dots,g_{m_0})$, $h=(h_1,\dots,h_{m_1})$, $A=(a_1|\dots|a_{m_0})\in\mathbb R^{d\times m_0}$, and $B=(b_1|\dots|b_{m_1})\in\mathbb R^{m_0\times m_1}$, from a small number of query samples.
The solution of the two-hidden-layer case presented in this paper is crucial, as it can be further generalized to deeper neural networks. We approach the problem by actively sampling finite difference approximations to Hessians of the network. Gathering several approximate Hessians allows us to reliably approximate the matrix subspace $\mathcal W$ spanned by the symmetric tensors $a_1\otimes a_1,\dots,a_{m_0}\otimes a_{m_0}$ formed by weights of the first layer, together with the entangled symmetric tensors $v_1\otimes v_1,\dots,v_{m_1}\otimes v_{m_1}$ formed by suitable combinations of the weights of the first and second layer as $v_\ell = A G_0 b_\ell/\Vert A G_0 b_\ell\Vert_2$, $\ell\in[m_1]$, for a diagonal matrix $G_0$ depending on the activation functions of the first layer. The identification of the rank-1 symmetric tensors within $\mathcal W$ is then performed by solving a robust nonlinear program, maximizing the spectral norm of the competitors constrained to the unit Frobenius sphere. We provide guarantees of stable recovery under a posteriori verifiable conditions. Once the rank-1 symmetric tensors $\{a_i\otimes a_i,\ i\in[m_0]\}\cup\{v_\ell\otimes v_\ell,\ \ell\in[m_1]\}$ are computed, we address their correct attribution to the first or second layer (the $a_i$'s are attributed to the first layer). The attribution to the layers is currently based on semi-heuristic reasoning, but it shows clear potential for reliable execution.
Having the correct attribution of the $a_i, v_\ell$ to the respective layers, and the consequent de-parametrization of the network, it is possible to estimate, up to intrinsic symmetries, the shifts of the activation functions of the first layer by a suitably adapted gradient descent iteration, and to compute exactly the matrix $G_0$. Eventually, from the vectors $v_\ell = A G_0 b_\ell/\Vert A G_0 b_\ell\Vert_2$ and the $a_i$'s, one can disentangle the weights $b_\ell$ by simple algebraic manipulations. Our method of identification of the weights of the network is fully constructive, with quantifiable sample complexity, and therefore contributes to dwindling the black box nature of the network training phase.
We corroborate our theoretical results by extensive numerical experiments, which confirm the effectiveness and feasibility of the proposed algorithmic pipeline.


Introduction
Deep learning is perhaps one of the most sensational scientific and technological developments of the last years. Despite the spectacular success of deep neural networks (NN), which outperform other pattern recognition methods and even achieve superhuman skills in some domains [12,36,57], with empirical successes confirmed in further areas such as speech recognition [27], optical character recognition [8], and game solving [44,55], the mathematical understanding of this machine learning technology is in its infancy. This is not only unsatisfactory from a scientific, especially mathematical, point of view; it also means that deep learning currently has the character of a black box method, and its success cannot yet be ensured by a full theoretical explanation. This leads to a lack of acceptance in many areas where interpretability is a crucial issue (like security, cf. [10]) or in applications where one wants to extract new insights from data [60].
Several general mathematical results on neural networks have been available since the 1990s [2,17,38,39,46-48], but deep neural networks have special features, and in particular superior properties in applications, that still cannot be fully explained by the known results. In recent years, new mathematical insights have been derived for understanding the approximation properties (expressivity) [19,53] and stability properties [9,67] of deep neural networks. Several other crucial and challenging questions remain open.
A fundamental one concerns the number of training data required to obtain a good neural network, i.e., one achieving small generalization errors for future data. Classical statistical learning theory splits this error into bias and variance and gives general estimates by means of the so-called VC dimension or Rademacher complexity of the class of neural networks used [54]. However, the currently available estimates of these parameters [26] provide very pessimistic barriers in comparison to the empirical success. In fact, the trade-off between bias and variance is a function of the complexity of a network, which should be estimated by the number of sampling points needed to identify it uniquely. Thus, on the one hand, it is of interest to know which neural networks can be uniquely and stably determined by finitely many training points. On the other hand, unique identifiability is clearly a form of interpretability.
The motivating problem of this paper is the robust and resource-efficient identification of feedforward neural networks. Unfortunately, it is known that identifying even a very simple (but general enough) neural network is NP-hard [7,33]. Even without invoking fully connected neural networks, recent work [22,41] showed that the training of one single neuron (ridge function or single-index model) can exhibit any possible degree of intractability, depending on the distribution of the input. Recent results [3,34,42,52,56], on the other hand, are more encouraging and show that minimizing the square loss of a (deep) neural network does not, in general or asymptotically (for a large number of neurons), have poor local minima, although critical saddle points may remain.
In this paper, we present conditions for a fully nonlinear two-layer neural network to be provably identifiable with a number of samples depending polynomially on the dimension of the network. Moreover, we prove that our procedure is robust to perturbations. Our result is clearly of theoretical nature, but also fully constructive and easily implementable. To our knowledge, this work is the first that allows a provable de-parametrization of the problem of deep network identification, beyond the simpler case of shallow (one hidden layer) neural networks already considered in the very recent literature [3,23,32,34,42,43,52,56]. For the implementation, we require no black box high-dimensional optimization methods, and no concerns about complex energy loss landscapes need to be addressed; only classical and relatively simple calculus and linear algebra tools are used (mostly function differentiation and singular value decompositions). The results of this paper build upon the work [22,23], where the approximation from a finite number of sampling points has already been derived for the single neuron and for one-hidden-layer neural networks. The generalization of the approach of the present paper to networks with more than two hidden layers is surprisingly simpler than one may expect, and it is in the course of finalization [21]; see Sect. 5 (v) below for some details.

Notation
Let us collect here some notation used in this paper. Given any integer $m\in\mathbb N$, we use the symbol $[m]:=\{1,2,\dots,m\}$ for the index set of the first $m$ integers. We denote by $B_1^d$ the Euclidean unit ball in $\mathbb R^d$, by $S^{d-1}$ the Euclidean sphere, and by $\mu_{S^{d-1}}$ its uniform probability measure. We denote by $\ell_q^d$ the $d$-dimensional Euclidean space endowed with the norm $\Vert x\Vert_{\ell_q^d}=\big(\sum_{j=1}^d |x_j|^q\big)^{1/q}$. For $q=2$, we often write $\Vert x\Vert_2=\Vert x\Vert_{\ell_2^d}$. For a matrix $M$, we denote by $\sigma_k(M)$ its $k$-th singular value. We denote by $\mathcal S$ the sphere of symmetric matrices of unit Frobenius norm $\Vert\cdot\Vert_F$. The spectral norm of a matrix is denoted $\Vert\cdot\Vert$. Given a closed convex set $C$, we denote by $P_C$ the orthogonal projection operator onto $C$. (Sometimes we use such operators to project onto subspaces of $\mathbb R^d$ or subspaces of symmetric matrices or onto balls of such spaces.) For vectors $x_1,\dots,x_k\in\mathbb R^d$, we denote by $x_1\otimes\cdots\otimes x_k$ the tensor of entries $(x_{1,i_1}\cdots x_{k,i_k})_{i_1,\dots,i_k}$. For $k=2$, the tensor product $x\otimes y$ of two vectors $x,y\in\mathbb R^d$ equals the matrix $xy^T=(x_i y_j)_{ij}$. For any matrix $M\in\mathbb R^{m\times n}$, $\mathrm{vec}(M):=(m_{11},m_{21},\dots,m_{m1},m_{12},m_{22},\dots,m_{mn})^T\in\mathbb R^{mn}$ is its vectorization, i.e., the vector created by stacking the columns of $M$.
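The two conventions used most often below, the outer-product form of the tensor product and the column-stacking vectorization, can be checked with a short numerical sketch (illustrative only; not part of the formal development):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0, 5.0])

# The tensor product of two vectors equals the outer-product matrix x y^T.
T = np.tensordot(x, y, axes=0)
assert np.allclose(T, np.outer(x, y))

# vec(M) stacks the columns of M (column-major / Fortran order).
M = np.array([[1, 4], [2, 5], [3, 6]])
vecM = M.flatten(order="F")
assert (vecM == np.array([1, 2, 3, 4, 5, 6])).all()
```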

Meet the Neuron
The simplest artificial neural network $f:\Omega\subset\mathbb R^d\to\mathbb R$ is a network consisting of exactly one artificial neuron, which is modeled by a ridge function (or single-index model)
$$f(x)=g(a^T x),$$
where $g:\mathbb R\to\mathbb R$ is the shifted activation function $\phi(\cdot+\theta)$ and the vector $a\in\mathbb R^d$ expresses the weight of the neuron. Since the beginning of the 1990s [30,31], there is a vast mathematical statistics literature about single-index models, which addresses the problem of approximating $a$ and possibly also $g$ from a finite number of samples of $f$, to yield an expected least-squares approximation of $f$ on a bounded domain $\Omega\subset\mathbb R^d$. Assume for the moment that we can evaluate the network $f$ at any point in its domain; we refer to this setting as active sampling. As we aim at uniform approximations, we adhere here to the language of recent results about the sampling complexity of ridge functions from the approximation theory literature, e.g., [13,22,41]. In those papers, the identification of the neuron is performed by using approximate differentiation. Let us clarify how this method works, as it will be an inspiration for the further developments below. For any $\epsilon>0$, points $x_i$, $i=1,\dots,m_X$, and differentiation directions $\varphi_j$, $j=1,\dots,m$, we have
$$\frac{f(x_i+\epsilon\varphi_j)-f(x_i)}{\epsilon}\approx\frac{\partial f}{\partial\varphi_j}(x_i)=g'(a^T x_i)\,a^T\varphi_j. \quad (1)$$
Hence, differentiation exposes the weight of a neuron and allows us to test it against test vectors $\varphi_j$. The approximate relationship (1) forms, for every fixed index $i$, a linear system of dimensions $m\times d$, whose unknown is $x_i^*=g'(a^T x_i)\,a$. Solving the systems approximately and independently for $i=1,\dots,m_X$ yields multiple approximations $\hat a=x_i^*/\Vert x_i^*\Vert_2\approx a$ of the weight; the most stable of them with respect to the approximation error in (1) is the one for which $\Vert x_i^*\Vert_2$ is maximal. Once $\hat a\approx a$ is learned, one can easily construct a function $\hat f(x)=\hat g(\hat a^T x)$ by approximating $\hat g(t)\approx f(\hat a t)$ on further sampling points.
Under assumptions of smoothness of the activation function, $g\in C^s([0,1])$ for $s>1$, of $g'(0)\ne 0$, and of compressibility of the weight, i.e., $\Vert a\Vert_{\ell_q^d}$ small for $0<q\le 1$, one can construct, by using $L$ sampling points of the function $f$ and the approach sketched above, a function $\hat f(x)=\hat g(\hat a^T x)$ whose uniform approximation error decays polynomially in $L$. In particular, the result constructs the approximation of the neuron with an error of polynomial rate with respect to the number of samples, depending on the smoothness of the activation function and the compressibility of the weight vector $a$. The dependence on the input dimension is only logarithmic. To take advantage of the compressibility of the weight, compressive sensing [24] is a key tool to solve the linear systems (1). In [13], such an approximation result was obtained by an active and deterministic choice of the input points $x_i$. In order to relax somewhat the usage of active sampling, in the paper [22] a random sampling of the points $x_i$ has been proposed, and the resulting error estimate holds with high probability. The assumption $g'(0)\ne 0$ is somewhat crucial: it was pointed out in [22,41] that any level of tractability (polynomial complexity) and intractability (super-polynomial complexity) of the problem may be exhibited otherwise.
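The first-order procedure just described can be sketched in a few lines of Python (an illustrative toy instance with our own choice of activation and parameters, not the compressive-sensing algorithm of [13,22] verbatim): estimate $x_i^*=g'(a^T x_i)a$ from finite differences along random directions by least squares, keep the candidate of largest norm, and normalize.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps = 20, 1e-5

a = rng.normal(size=d)
a /= np.linalg.norm(a)                      # hidden unit-norm weight
f = lambda x: np.tanh(x @ a + 0.3)          # neuron f(x) = g(a^T x), g = tanh(. + 0.3)

m_phi, m_X = 40, 5
Phi = rng.normal(size=(m_phi, d)) / np.sqrt(d)   # differentiation directions
X = rng.normal(size=(m_X, d)) * 0.1              # sampling points near the origin

candidates = []
for x in X:
    # finite differences (f(x + eps*phi_j) - f(x))/eps ≈ g'(a^T x) a^T phi_j, cf. (1)
    y = np.array([(f(x + eps * phi) - f(x)) / eps for phi in Phi])
    x_star, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # solve Phi x* ≈ y
    candidates.append(x_star)

# keep the most stable candidate: the one of maximal norm
best = max(candidates, key=np.linalg.norm)
a_hat = best / np.linalg.norm(best)
print(np.linalg.norm(a_hat - np.sign(a_hat @ a) * a))  # small recovery error
```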

Shallow Networks: The One-Layer Case
Combining several neurons leads to richer function classes [38,39,46-48]. A neural network with one hidden layer and one output is simply a weighted sum of neurons whose activation functions only differ by a shift, i.e.,
$$f(x)=\sum_{i=1}^m b_i\,\phi(a_i^T x+\theta_i), \quad (2)$$
where $a_i\in\mathbb R^d$ and $b_i,\theta_i\in\mathbb R$ for all $i=1,\dots,m$. Sometimes it may be convenient below to use the more compact writing $f(x)=1^T g(A^T x)$, where $g=(g_1,\dots,g_m)$ with $g_i=b_i\,\phi(\cdot+\theta_i)$ and $A=(a_1|\dots|a_m)$. Differently from the case of the single neuron, the use of first-order differentiation may furnish information about $\mathcal A=\mathrm{span}\{a_1,\dots,a_m\}$ (active subspace identification [14,15], see also [22, Lemma 2.1]), but it does not yet allow the extraction of information about the individual weights $a_i$. For that, higher-order information is needed. Recent work shows that the identification of a network (2) can be related to tensor decompositions [1,23,32,43]. As pointed out in Sect. 1.2.1, differentiation exposes the weights. In fact, one way to relate the network to tensors and tensor decompositions is given by higher-order differentiation, in which case the tensor takes the form
$$\nabla^k f(x)=\sum_{i=1}^m g_i^{(k)}(a_i^T x)\,\underbrace{a_i\otimes\cdots\otimes a_i}_{k\ \text{times}},$$
which requires that the $g_i$'s are sufficiently smooth. In a setting where the samples are actively chosen, it is generally possible to approximate these derivatives by finite differences. However, even for passive sampling there are ways to construct similar tensors [23,32], which rely on Stein's lemma [58] or differentiation by parts (weak differentiation). Let us explain how passive sampling in this setting may be used for obtaining tensor representations of the network. If the probability measure of the sampling points $x_i$ is $\mu_X$ with known (or approximately known [18]) density $p(x)$ with respect to the Lebesgue measure, i.e., $d\mu_X(x)=p(x)dx$, then we can approximate the expected value of higher-order derivatives by using exclusively point evaluations of $f$.
This follows from the integration-by-parts identity
$$\mathbb E\big[\nabla^k f(X)\big]=\int\nabla^k f(x)\,p(x)\,dx=(-1)^k\int f(x)\,\nabla^k p(x)\,dx,$$
valid for sufficiently smooth and decaying densities $p$. In the work [32], decompositions of third-order symmetric tensors ($k=3$) [1,35,51] have been used for the weight identification of one-hidden-layer neural networks. Instead, beyond the classical results about principal Hessian directions [37], in [23] it is shown that using second derivatives ($k=2$) actually suffices, and the corresponding error estimates reflect positively the lower order and the potential of improved stability, see e.g., [16,28,29]. The main part of the present work is an extension of the latter approach, and therefore we will give a short summary of it, with emphasis on active sampling, which will be assumed in this paper as the sampling method. The first step of the approach in [23] is taking advantage of (3) to reduce the dimensionality of the problem from $d$ to $m$.
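For a standard Gaussian density $p$, we have $\nabla p(x)=-x\,p(x)$, so for $k=1$ the identity above reduces to Stein's lemma $\mathbb E[\nabla f(X)]=\mathbb E[f(X)X]$. A quick Monte Carlo sketch (our own toy check, with an arbitrary smooth test function):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
a = np.array([0.5, -0.2, 0.1])

f = lambda X: np.tanh(X @ a)                               # smooth test function
grad_f = lambda X: (1 - np.tanh(X @ a) ** 2)[:, None] * a  # its gradient

X = rng.standard_normal((400_000, d))        # X ~ N(0, I_d), density p
lhs = grad_f(X).mean(axis=0)                 # E[grad f(X)]
rhs = (f(X)[:, None] * X).mean(axis=0)       # E[f(X) X] = -E[f(X) grad p(X)/p(X)]
print(np.abs(lhs - rhs).max())               # ≈ 0 up to Monte Carlo error
```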
Reduction to the active subspace. Before stating the core procedure, we want to introduce a simple and optional method, which can help to reduce the problem complexity in practice. Assume $f:\Omega\subset\mathbb R^d\to\mathbb R$ takes the form (2), where $d\ge m$ and $a_1,\dots,a_m\in\mathbb R^d$ are linearly independent. From a numerical perspective, the input dimension $d$ of the network plays a relevant role in the complexity of the procedure. For this reason, in [23] the input dimension is effectively reduced to the number of neurons in the first hidden layer. With this reasoning, in the sections that follow we also consider networks where the input dimension matches the number of neurons of the first hidden layer.
Assume for the moment that the active subspace $\mathcal A=\mathrm{span}\{a_1,\dots,a_m\}$ is known. Let us choose any orthonormal basis of $\mathcal A$ and arrange it as the columns of a matrix $\hat A\in\mathbb R^{d\times m}$. Then
$$f(x)=f(P_{\mathcal A}x)=f(\hat A\hat A^T x),$$
which can be used to define a new network $\hat f(y):=f(\hat A y)$ whose weights are $\alpha_1=\hat A^T a_1,\dots,\alpha_m=\hat A^T a_m$; all the other parameters remain unchanged. Note that $\hat A\alpha_i=P_{\mathcal A}a_i=a_i$, and therefore $a_i$ can be recovered from $\alpha_i$. In summary, if the active subspace of $f$ is approximately known, then we can construct $\hat f$ such that the identifications of $f$ and $\hat f$ are equivalent. This allows us to reduce the problem to the identification of $\hat f$ instead of $f$.

Algorithm 1: Active subspace identification [23]
Input: a shallow neural network $f$ as in (2), step-size $\epsilon>0$ of the finite differences, number of samples $m_X$.
1. Draw $x_1,\dots,x_{m_X}$ uniformly from the unit sphere $S^{d-1}$.
2. Calculate the estimated gradients at $x_1,\dots,x_{m_X}$ by first-order finite differences with step-size $\epsilon$.
3. Compute the singular value decomposition of the matrix of estimated gradients, and let $\hat A$ collect the $m$ left singular vectors corresponding to the largest singular values.
Output: $\hat A$.

The guarantees of [23] require that the matrix of gradients has full rank, i.e., that its $m$-th singular value is bounded away from zero; the approximation error of the active subspace then scales linearly in the step-size $\epsilon$, where $C_1,C_2>0$ are absolute constants depending on the smoothness of the $g_i$'s.
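Algorithm 1 amounts to a PCA of approximate gradients. A compact numerical sketch (our own toy instance, with made-up network parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, eps, m_X = 30, 4, 1e-5, 200

A_true = np.linalg.qr(rng.normal(size=(d, m)))[0]      # weights a_1,...,a_m (columns)
theta = rng.normal(size=m)
f = lambda x: np.sum(np.tanh(A_true.T @ x + theta))    # shallow network (2), b_i = 1

# Steps 1-2: gradients by forward finite differences at random sphere points
X = rng.normal(size=(m_X, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
grads = np.array([[(f(x + eps * e) - f(x)) / eps for e in np.eye(d)] for x in X])

# Step 3: SVD of the gradient matrix; top-m left singular vectors span ≈ A
U, s, _ = np.linalg.svd(grads.T, full_matrices=False)
A_hat = U[:, :m]

# subspace error ||P_A - P_Ahat||_F should be ≈ 0
P, P_hat = A_true @ A_true.T, A_hat @ A_hat.T
print(np.linalg.norm(P - P_hat))
```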
Identifying the weights. As clarified in the previous section, we can assume from now on that $d=m$ without loss of generality. Let $f$ be a network of the type (2), with twice differentiable activation functions $(g_i)_{i=1,\dots,m}$ and linearly independent weights $(a_i)_{i=1,\dots,m}\in\mathbb R^m$ of unit norm. Then $f$ has second derivative
$$\nabla^2 f(x)=\sum_{i=1}^m g_i''(a_i^T x)\,a_i\otimes a_i, \quad (5)$$
whose expression represents a nonorthogonal rank-1 decomposition of the Hessian. The idea is, first of all, to modify the network by an ad hoc linear transformation (whitening) of the input in such a way that $(W a_i/\Vert W a_i\Vert_2)_{i=1,\dots,m}$ forms an orthonormal system. The computation of $W$ can be performed by spectral decomposition of any positive definite matrix $G\in\mathcal A=\mathrm{span}\{a_1\otimes a_1,\dots,a_m\otimes a_m\}$: from the spectral decomposition $G=UDU^T$, we define $W=D^{-\frac12}U^T$ (see [23, Theorem 3.7]). This procedure is called whitening; it allows us to reduce the problem to networks with nearly orthogonal weights, and it presupposes having obtained $\hat{\mathcal A}\approx\mathcal A=\mathrm{span}\{a_1\otimes a_1,\dots,a_m\otimes a_m\}$. By using (5) and an approach similar to Algorithm 1 (one simply substitutes there the approximate gradients with vectorized approximate Hessians), one can compute $\hat{\mathcal A}$ under the assumption that also the second-order matrix
$$\big(\mathrm{vec}(\nabla^2 f(x_1))\,|\,\cdots\,|\,\mathrm{vec}(\nabla^2 f(x_{m_X}))\big)$$
is of full rank, where $\mathrm{vec}(\nabla^2 f(x))$ is the vectorization of the Hessian $\nabla^2 f(x)$.
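Whitening can be sketched as follows (a toy check under the simplifying assumption that the positive definite matrix $G=\sum_i a_i\otimes a_i\in\mathcal A$ is available exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5

# non-orthogonal, linearly independent unit-norm weights a_1, ..., a_m (columns)
A = rng.normal(size=(m, m)) + 2 * np.eye(m)
A /= np.linalg.norm(A, axis=0, keepdims=True)

G = A @ A.T                      # a positive definite element of span{a_i ⊗ a_i}
D, U = np.linalg.eigh(G)         # spectral decomposition G = U D U^T
W = np.diag(D ** -0.5) @ U.T     # whitening matrix W = D^{-1/2} U^T

WA = W @ A                       # columns W a_i
WA /= np.linalg.norm(WA, axis=0, keepdims=True)
print(np.linalg.norm(WA.T @ WA - np.eye(m)))   # ≈ 0: whitened weights orthonormal
```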
After whitening, one can assume without loss of generality that the vectors $(a_i)_{i=1,\dots,m}\in\mathbb R^m$ are nearly orthonormal in the first place. Hence, the representation (5) would be a near-spectral decomposition of the Hessian, and the components $a_i\otimes a_i$ would represent the approximate eigenvectors. However, the numerical stability of spectral decompositions is ensured only under spectral gaps [6,50]. In order to maximally stabilize the approximation of the $a_i$'s, one seeks matrices $M\in\hat{\mathcal A}$ with a maximal spectral gap between the first and second largest eigenvalues. This is achieved by the maximizers of the following nonconvex program
$$\max\Vert M\Vert\quad\text{s.t.}\quad M\in\hat{\mathcal A},\ \Vert M\Vert_F=1,$$
where $\Vert\cdot\Vert$ and $\Vert\cdot\Vert_F$ are the spectral and Frobenius norms, respectively. This program can be solved by a suitable projected gradient ascent, see for instance [23]. The corresponding recovery guarantee holds with probability at least $1-m\exp\big(-\frac{m_X c^2}{\max\{C_1,C_2\}^2 m^2}\big)$, for a suitable constant $c>0$ intervening (together with some fixed power of $m$) in the asymptotical constant of the approximation (7). Moreover, once the weights are retrieved, one constructs an approximating function $\hat f$. While this result has been generalized to the case of passive sampling in [23], and through whitening allows for the identification of nonorthogonal weights, it is restricted to the case $m\le d$ and linearly independent weights $\{a_i : i=1,\dots,m\}$.
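The nonconvex program can be attacked by a simple projected gradient ascent: ascend along a (sub)gradient of the spectral norm, namely $u_1 u_1^T$ for a top eigenvector $u_1$, then project back onto $\hat{\mathcal A}$ and the unit Frobenius sphere. The following sketch (our own minimal variant, with an exact subspace and orthonormal $a_i$) recovers one of the matrices $a_i\otimes a_i$:

```python
import numpy as np

rng = np.random.default_rng(4)
m, gamma, iters = 6, 2.0, 200

A = np.linalg.qr(rng.normal(size=(m, m)))[0]             # orthonormal weights a_i
basis = np.array([np.outer(a, a).ravel() for a in A.T])  # orthonormal rows vec(a_i a_i^T)
P = basis.T @ basis                                      # projector onto span{a_i ⊗ a_i}

def proj(M):
    # project onto the subspace, then onto the unit Frobenius sphere
    M = (P @ M.ravel()).reshape(m, m)
    return M / np.linalg.norm(M)

M = proj(rng.normal(size=(m, m)))
for _ in range(iters):
    w, V = np.linalg.eigh(M)
    k = np.argmax(np.abs(w))
    u1 = V[:, k]                           # top eigenvector: ascent direction u1 u1^T
    M = proj(M + gamma * np.sign(w[k]) * np.outer(u1, u1))

overlaps = [abs(np.sum(M * np.outer(a, a))) for a in A.T]
print(max(overlaps))                       # ≈ 1: M converged to some a_i ⊗ a_i (up to sign)
```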
The main goal of this paper is to generalize this approach to account both for the identification of networks with two fully nonlinear hidden layers and for the case where $m>d$ and the weights are not necessarily nearly orthogonal or even linearly independent (see Remark 10 below).

Deeper Networks: The Two-Layer Case
What follows further extends the theory discussed in the previous sections to a wider class of functions, namely neural networks with two hidden layers. By doing so, we will also address a relevant open problem stated in [23], which deals with the identification of shallow neural networks where the number of neurons is larger than the input dimension. First, we need a precise definition of the architecture of the neural networks we intend to consider. Let $A=(a_1|\dots|a_{m_0})\in\mathbb R^{d\times m_0}$ and $B=(b_1|\dots|b_{m_1})\in\mathbb R^{m_0\times m_1}$, and let $g_1,\dots,g_{m_0}$ and $h_1,\dots,h_{m_1}$ be univariate functions.
Set $G_0=\mathrm{diag}\big(g_1'(0),\dots,g_{m_0}'(0)\big)$, and assume the following: (A3) the derivatives of the $g_i$'s and $h_\ell$'s are uniformly bounded according to $|g_i^{(j)}(t)|\le\kappa_j$ and $|h_\ell^{(j)}(t)|\le\eta_j$ for $j=0,\dots,3$. Then we define a set of two-layer networks $\mathcal F(d,m_0,m_1)$ by
$$f(x)=\sum_{\ell=1}^{m_1} h_\ell\Big(\sum_{i=1}^{m_0} b_{i\ell}\,g_i(a_i^T x)\Big),$$
where $b_{i\ell}$ is the $(i,\ell)$-th entry of $B$, respectively the $i$-th entry of the vector $b_\ell$.
Sometimes it may be convenient to use the more compact writing $f(x)=1^T h(B^T g(A^T x))$, where $g=(g_1,\dots,g_{m_0})$ and $h=(h_1,\dots,h_{m_1})$. In the previous section, we presented a dimension reduction that can be applied to one-layer neural networks and which can be useful to reduce the dimensionality from the input dimension to the number of neurons of the first layer. The same approach can be applied to networks in the class $\mathcal F(d,m_0,m_1)$. For the approximation error of the active subspace, we end up with the following corollary of Theorem 1.
with probability at least $1-m_0\exp\big(-\frac{s^2 m_X\alpha}{2C_4 m_1}\big)$ and constants $C_3,C_4>0$ that depend only on $\kappa_j,\eta_j$ for $j=0,\dots,3$. In view of Corollary 4, we can again apply [23, Theorem 1.1] and assume, without loss of generality, that $d=m_0$, for which the frame condition (9) automatically implies the invertibility of $A$, as the vectors $v_\ell$ are linear combinations of the $a_i$'s.

Approximating the Span of Tensors of Weights
In the one-layer case described earlier, the unique identification of the weights is made possible by constructing a matrix space whose rank-1 basis elements are outer products of the weight profiles of the network. This section illustrates the extension of this approach beyond shallow neural networks. Once again, we will make use of differentiation, and overall there will be many parallels to the approach in [23]. However, the intuition behind the matrix space will be less straightforward, because we can no longer directly express the second derivative of a two-layer network as a linear combination of symmetric rank-1 matrices. This is due to the fact that the Hessian matrix of a network $f\in\mathcal F(m_0,m_0,m_1)$ has the form
$$\nabla^2 f(x)=\sum_{\ell=1}^{m_1} h_\ell'\big(b_\ell^T g(A^T x)\big)\sum_{i=1}^{m_0} b_{i\ell}\,g_i''(a_i^T x)\,a_i\otimes a_i+\sum_{\ell=1}^{m_1} h_\ell''\big(b_\ell^T g(A^T x)\big)\Big(\sum_{i=1}^{m_0} b_{i\ell}\,g_i'(a_i^T x)\,a_i\Big)\otimes\Big(\sum_{j=1}^{m_0} b_{j\ell}\,g_j'(a_j^T x)\,a_j\Big).$$
Therefore, $\nabla^2 f(x)\in\mathrm{span}\{a_i\otimes a_j+a_j\otimes a_i\mid i,j=1,\dots,m_0\}$, which has dimension $\frac{m_0(m_0+1)}{2}$ and is in general not spanned by symmetric rank-1 matrices. This expression is indeed quite complicated, due to the chain rule and the mixed tensor contributions that consequently appear. At first look, it would seem impossible to use an approach similar to the one for shallow neural networks recalled in the previous section. Nevertheless, a relatively simple algebraic manipulation allows us to recognize some useful structure: for a fixed $x\in\mathbb R^{m_0}$, we rearrange the expression as
$$\nabla^2 f(x)=\sum_{i=1}^{m_0}\beta_i(x)\,a_i\otimes a_i+\sum_{\ell=1}^{m_1}\gamma_\ell(x)\,v_\ell(x)\otimes v_\ell(x), \quad (10)$$
which is a combination of symmetric rank-1 matrices, since $\sum_{j=1}^{m_0} b_{j\ell}\,g_j'(a_j^T x)\,a_j\in\mathbb R^{m_0}$. We write the latter expression more compactly by introducing the notation $\beta_i(x)=\sum_{\ell=1}^{m_1} h_\ell'\big(b_\ell^T g(A^T x)\big)\,b_{i\ell}\,g_i''(a_i^T x)$ and $v_\ell(x)=A G(x) b_\ell/\Vert A G(x) b_\ell\Vert_2$, $\gamma_\ell(x)=h_\ell''\big(b_\ell^T g(A^T x)\big)\Vert A G(x) b_\ell\Vert_2^2$, where $G(x)=\mathrm{diag}\big(g_1'(a_1^T x),\dots,g_{m_0}'(a_{m_0}^T x)\big)$.

Fig. 1 Illustration of the relationship between $\mathcal W$ (black line) and $\mathrm{span}\{\nabla^2 f(x)\mid x\in\mathbb R^{m_0}\}$ (light blue region), given by two nonlinear cones that fan out from $\nabla^2 f(0)$. There is no reason to believe that these cones are symmetric around $\mathcal W$. The gray cones show the maximal deviation of $\hat{\mathcal W}$ from $\mathcal W$.

Let us now introduce the fundamental matrix space
$$\mathcal W=\mathrm{span}\{a_1\otimes a_1,\dots,a_{m_0}\otimes a_{m_0},\,v_1\otimes v_1,\dots,v_{m_1}\otimes v_{m_1}\}, \quad (14)$$
where the span is taken over $\mathbb R$, $a_1,\dots,a_{m_0}$ are the weight profiles of the first layer, and the $v_\ell:=v_\ell(0)=A G_0 b_\ell/\Vert A G_0 b_\ell\Vert_2$ encode "entangled" information between $A$ and $B$. For this reason, we call the $v_\ell$'s entangled weights. Let us stress at this point that the definition and the constructive approximation of the space $\mathcal W$ is perhaps the most crucial and relevant contribution of this paper. In fact, by inspecting carefully the expression (10), we immediately notice that $\nabla^2 f(0)\in\mathcal W$, and also that the first sum in (10), namely $\sum_{i=1}^{m_0}\beta_i(x)\,a_i\otimes a_i$, lies in $\mathcal W$ for all $x\in\mathbb R^{m_0}$. Moreover, for arbitrary sampling points $x$, deviations of $\nabla^2 f(x)$ from $\mathcal W$ are only due to the second term in (10). The intuition is that, for suitably centered and concentrated distributions of sampling points, the Hessians remain close to $\mathcal W$; see Fig. 1 for a two-dimensional sketch of the geometrical situation. Hence, we attempt an approximation of $\mathcal W$ by PCA of a collection of such approximate Hessians. Practically, by active sampling (targeted evaluations of the network $f$) we first construct estimates $\Delta_\epsilon^2 f(x_1),\dots,\Delta_\epsilon^2 f(x_{m_X})$ of the Hessians $\nabla^2 f(x_1),\dots,\nabla^2 f(x_{m_X})$, at sampling points $x_1,\dots,x_{m_X}\in\mathbb R^{m_0}$ drawn independently from a suitable distribution $\mu_X$. Next, we define the matrix
$$\hat W=\big(\mathrm{vec}(\Delta_\epsilon^2 f(x_1))\,|\,\cdots\,|\,\mathrm{vec}(\Delta_\epsilon^2 f(x_{m_X}))\big),$$
whose leading $m_0+m_1$ left singular vectors span the approximation $\hat{\mathcal W}$. The whole procedure of calculating $\hat{\mathcal W}$ is given in Algorithm 2. It should be clear that the choice of $\mu_X$ plays a crucial role for the quality of this method. In the analysis that follows, we focus on distributions that are centered and concentrated. Figure 1 helps to form a better geometrical intuition of the result of the procedure. It shows the region covered by the Hessians, indicated by the light blue area, which envelopes the space $\mathcal W$ in a sort of nonlinear/nonconvex cone originating from $\nabla^2 f(0)$. In general, the Hessians do not concentrate around $\mathcal W$ in a symmetric way, which means that the "center of mass" of the Hessians can never be perfectly aligned with the space $\mathcal W$, regardless of the number of samples.
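The rearrangement of the Hessian into rank-1 terms can be verified numerically on a toy network (our own choice of smooth activations and random weights; the coefficient functions are the $\beta_i$, $\gamma_\ell$ defined in the text):

```python
import numpy as np

rng = np.random.default_rng(5)
m0, m1, eps = 3, 2, 1e-5
A = rng.normal(size=(m0, m0)); A /= np.linalg.norm(A, axis=0)
B = rng.normal(size=(m0, m1))
tau = rng.normal(size=m0)                       # activation shifts

g   = lambda t: np.tanh(t + tau)                # g_i(t) = tanh(t + tau_i)
dg  = lambda t: 1 - np.tanh(t + tau) ** 2
ddg = lambda t: -2 * np.tanh(t + tau) * (1 - np.tanh(t + tau) ** 2)
h, dh, ddh = np.sin, np.cos, lambda t: -np.sin(t)
f = lambda x: np.sum(h(B.T @ g(A.T @ x)))       # two-layer network

x = rng.normal(size=m0) * 0.3
y, z = A.T @ x, B.T @ g(A.T @ x)

# rank-1 combination: sum_i beta_i(x) a_i a_i^T + sum_l gamma_l(x) v_l(x) v_l(x)^T
H10 = np.zeros((m0, m0))
for i in range(m0):
    beta_i = np.sum(dh(z) * B[i, :]) * ddg(y)[i]
    H10 += beta_i * np.outer(A[:, i], A[:, i])
for l in range(m1):
    w = A @ (dg(y) * B[:, l])                   # A G(x) b_l
    v = w / np.linalg.norm(w)                   # entangled direction v_l(x)
    H10 += ddh(z[l]) * np.linalg.norm(w) ** 2 * np.outer(v, v)

# reference Hessian by second-order finite differences
E = np.eye(m0) * eps
H_fd = np.array([[(f(x + E[i] + E[j]) - f(x + E[i]) - f(x + E[j]) + f(x)) / eps**2
                  for j in range(m0)] for i in range(m0)])
print(np.linalg.norm(H_fd - H10))               # small: the rank-1 form matches
```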
In this analogy, the center of mass corresponds to the space estimated by Algorithm 2, which essentially performs a noncentered principal component analysis of the observed Hessian matrices. The primary result of this section is Theorem 5, which provides an estimate of the approximation error of Algorithm 2 depending on the sub-Gaussian norm of the sample distribution $\mu_X$ and the number of neurons in the respective layers. More precisely, this result gives a precise worst-case estimate of the error caused by the imbalance of mass. For the reasons mentioned above, the error does not necessarily vanish with an increasing number of samples, but the probability under which the statement holds will tend to 1. In Fig. 1, the estimated region is illustrated by the gray cones that show the maximal, worst-case deviation from $\mathcal W$. One crucial condition for Theorem 5 to hold is that there exists an $\alpha>0$ such that
$$\sigma_{m_0+m_1}\Big(\mathbb E\big[\mathrm{vec}(\nabla^2 f(X))\,\mathrm{vec}(\nabla^2 f(X))^T\big]\Big)\ \ge\ \alpha. \quad (15)$$
This assumption ensures that the space spanned by the observed Hessians has, in expectation, at least dimension $m_0+m_1$. Aside from this technical aspect, this condition implicitly helps to avoid network configurations that are reducible, for which certain weights cannot be recovered. For example, we can define a network in $\mathcal F(2,2,1)$ whose second-layer weight vector has vanishing second entry, e.g., $b_1=(1,0)^T$. It is easy to see that $a_2$ will then never be used during a forward pass through the network, which makes it impossible to recover $a_2$ from the output of the network.
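Algorithm 2 itself is a noncentered PCA of the vectorized approximate Hessians. A minimal sketch of the pipeline on a toy network (our own parameter choices; Hessians estimated by finite differences, sampling points concentrated near the origin; as a modest sanity check we verify that $\nabla^2 f(0)$, which lies in $\mathcal W$, is captured by $\hat{\mathcal W}$):

```python
import numpy as np

rng = np.random.default_rng(6)
m0, m1, m_X, eps = 3, 2, 300, 1e-5

A = rng.normal(size=(m0, m0)); A /= np.linalg.norm(A, axis=0)
B = rng.normal(size=(m0, m1))
tau = rng.normal(size=m0)
g = lambda t: np.tanh(t + tau)                   # shifted first-layer activations
f = lambda x: np.sum(np.sin(B.T @ g(A.T @ x)))   # a network in F(m0, m0, m1)

def fd_hessian(x):
    # entrywise second-order finite differences of f at x
    E = np.eye(m0) * eps
    return np.array([[(f(x + E[i] + E[j]) - f(x + E[i]) - f(x + E[j]) + f(x))
                      / eps**2 for j in range(m0)] for i in range(m0)])

X = rng.normal(size=(m_X, m0)) * 0.05            # centered, concentrated mu_X
W_hat = np.column_stack([fd_hessian(x).ravel() for x in X])

U, s, _ = np.linalg.svd(W_hat, full_matrices=False)
P_W_hat = U[:, :m0 + m1] @ U[:, :m0 + m1].T      # projection onto the estimate of W

h0 = fd_hessian(np.zeros(m0)).ravel()            # vec of Hessian at 0 lies in W
residual = np.linalg.norm(h0 - P_W_hat @ h0) / np.linalg.norm(h0)
print(residual)                                  # close to 0
```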
In the theorem below and in the proofs that follow, we will make use of the sub-Gaussian norm $\Vert\cdot\Vert_{\psi_2}$ of a random variable. This quantity measures how fast the tails of a distribution decay, and such a decay plays an important role in several concentration inequalities. More generally, for $p\ge 1$, the $\psi_p$-norm of a scalar random variable $Z$ is defined as
$$\Vert Z\Vert_{\psi_p}=\inf\{t>0 : \mathbb E\exp(|Z|^p/t^p)\le 2\}.$$
For a random vector $X$ on $\mathbb R^d$, the $\psi_p$-norm is given by
$$\Vert X\Vert_{\psi_p}=\sup_{x\in S^{d-1}}\Vert\langle X,x\rangle\Vert_{\psi_p}.$$
The random variables for which $\Vert X\Vert_{\psi_1}<\infty$ are called subexponential, and those for which $\Vert X\Vert_{\psi_2}<\infty$ are called sub-Gaussian. More generally, the Orlicz space $L_{\psi_p}=L_{\psi_p}(\Omega,\Sigma,\mathbb P)$ consists of all real random variables $X$ on the probability space $(\Omega,\Sigma,\mathbb P)$ with finite $\Vert X\Vert_{\psi_p}$ norm, and its elements are called $p$-subexponential random variables. Below, we mainly focus on sub-Gaussian random variables. In particular, every bounded random variable is sub-Gaussian, which covers all the cases we discuss in this work. We refer to [65] for more details. One example of a sub-Gaussian distribution is the uniform distribution on the unit sphere, $X\sim\mathrm{Unif}(S^{d-1})$, which has sub-Gaussian norm $\Vert X\Vert_{\psi_2}\le C/\sqrt d$ for an absolute constant $C>0$.

Theorem 5. Let $f\in\mathcal F(m_0,m_0,m_1)$ be a neural network within the class described in Definition 3, and consider the space $\mathcal W$ as defined in (14). Assume that $\mu_X$ is a probability measure with $\mathrm{supp}(\mu_X)\subset B_1^{m_0}$, $\mathbb E X=0$, and that there exists an $\alpha>0$ such that (15) holds. Then, for any $\epsilon>0$, Algorithm 2 returns a projection $P_{\hat{\mathcal W}}$ that fulfills
$$\Vert P_{\mathcal W^*}-P_{\hat{\mathcal W}}\Vert_F\ \le\ C_1\,\frac{\epsilon+m_1\Vert X\Vert_{\psi_2}^2}{\alpha} \quad (16)$$
for a suitable subspace $\mathcal W^*\subset\mathcal W$ (we can actually assume that $\mathcal W^*=\mathcal W$ according to Remark 6 below), with probability tending to 1 as $m_X\to\infty$, where $c>0$ is an absolute constant and $C,C_1,C_2>0$ are constants depending on the constants $\kappa_j,\eta_j$ for $j=0,\dots,3$.

Remark 6
If ε > 0 is sufficiently small, due to (15) the space Ŵ returned by Algorithm 2 has dimension m_0 + m_1. If the error bound (16) in Theorem 5 is such that ‖P_{W*} − P_Ŵ‖_F < 1, then Ŵ and W* must have the same dimension. Moreover, W* ⊂ W and dim(W) = m_0 + m_1 would necessarily imply that W = W*. Hence, for ‖P_{W*} − P_Ŵ‖_F < 1 and ε > 0 sufficiently small, we have W* = W.
As already mentioned above, the bound can be specialized by a suitable choice of the parameters. In this case, the error bound (16) behaves like an expression that is small for ε > 0 small and m_1 ≪ m_0. The latter condition seems to favor networks for which the inner layer has a significantly larger number of neurons than the outer layer. This expectation is actually observed numerically, see Sect. 4. We have to add, though, that the parameter α > 0 that intervenes in the error bound (16) might also depend on m_0, m_1 (as it is in fact an estimate of an (m_0 + m_1)-th singular value as in (15)). Hence, the dependency on the network dimensions is likely more complex and depends on the interplay between the input distribution μ_X and the network architecture. In fact, at least judging from our numerical experiments, the error bound (16) is rather pessimistic, and it certainly describes a worst-case analysis. One reason might be that some crucial estimates in its proof could be significantly improved. Another reason could be the rather great generality of the activation functions of the networks we analyze in this paper, as described in Definition 3. Perhaps the specific instances used in the numerical experiments enjoy better identification properties.

Estimating Hessians of the Network by Finite Differences
Before addressing the proof of Theorem 5, we give a precise definition of the finite differences we use to approximate the Hessian matrices. Denote by e_i the i-th Euclidean canonical basis vector in R^d. The second-order finite difference approximation of the Hessian entries of f is given by

Δ²_ε f(x)[i, j] := ( f(x + ε e_i + ε e_j) − f(x + ε e_i) − f(x + ε e_j) + f(x) ) / ε²,   (17)

for i, j = 1, …, d = m_0 and a step-size ε > 0.
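A direct implementation of the entrywise second-order difference might look as follows; `f` is any black box returning point queries. This is a sketch, not the paper's code.

```python
import numpy as np

def hessian_fd(f, x, eps=1e-4):
    """Approximate the Hessian of f at x entrywise via the second-order
    finite difference
    (f(x + eps*e_i + eps*e_j) - f(x + eps*e_i) - f(x + eps*e_j) + f(x)) / eps**2."""
    d = x.size
    E = eps * np.eye(d)
    fx = f(x)
    f_i = np.array([f(x + E[i]) for i in range(d)])
    H = np.empty((d, d))
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = (f(x + E[i] + E[j]) - f_i[i] - f_i[j] + fx) / eps**2
    return H
```

For a quadratic f the approximation is exact up to rounding; for a general smooth f the error behaves as quantified in Lemma 7.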
is constructed as in (17) for some ε > 0; Lemma 7 then provides a uniform bound on the resulting approximation error. For the proof of Lemma 7, we simply use the Lipschitz continuity of the functions g, h and of their derivatives, and make use of ‖a_ℓ‖_2, ‖b_ℓ‖_2 ≤ 1. The details can be found in the Appendix (Sect. 1).

Span of Tensors of (Entangled) Network Weights: Proof of Theorem 5
The proof can essentially be divided into two separate bounds, which will be addressed by the two lemmas below. For both lemmas, we assume that X_1, …, X_{m_X} ∼ μ_X independently and that supp(μ_X) ⊆ B_1^{m_0}. Additionally, we define the random matrices Ŵ and W* in (19)–(20), where P_W denotes the orthogonal projection onto W (cf. (14)). For the reader's convenience, we recall from (10) that the Hessian matrix of f ∈ F(m_0, m_0, m_1) can be expressed in terms of the quantities γ_i(x), τ_ℓ(x), and v_ℓ(x) introduced in (11)–(13). We further simplify this expression by introducing notations which allow us to rewrite (10) in terms of matrix multiplications.

Lemma 8 Let f ∈ F(m_0, m_0, m_1) and let Ŵ, W* be defined as in (19)–(20), where μ_X satisfies supp(μ_X) ⊆ B_1^{m_0} and has sub-Gaussian norm ‖X‖_{ψ_2}. Then the stated bound holds with probability at least 1 − 2 exp(−c m_1 m_X), where c > 0 is an absolute constant and C, C′ > 0 depend only on the constants κ_j, η_j for j = 0, …, 3.
Proof By the triangle inequality, we obtain a splitting into two terms. For the first term on the right-hand side, we can use the worst-case estimate from Lemma 7, yielding a bound with some constant C > 0. The second term in (21) can be bounded as follows (the explanation of the individual identities and estimates is given immediately below). In the first two equalities, we made use of the fact that A Γ_x A^T ∈ W and of the definition of an orthogonal projection. The remaining inequalities follow directly from the submultiplicativity of ‖·‖_F and ‖·‖ combined with the Lipschitz continuity of the activation functions and their derivatives (cf. 3 in Definition 3). Since ‖a_j‖ ≤ 1, we can estimate the relevant sub-exponential norm with an absolute constant c_1 > 0, where we applied [64, Lemma 2.2.2] in the first inequality and used that ‖Y²‖_{ψ_1} = ‖Y‖²_{ψ_2} for any scalar random variable Y, together with the fact that the sub-Gaussian norm of a vector is defined via its one-dimensional marginals [65]. The random vectors X_i ∼ μ_X are i.i.d., which allows us to drop the dependency on i in the last step. The previous bound also guarantees a bound on the expectation. Denoting the corresponding sum as in (24) and applying the Bernstein inequality for sub-exponential random variables [65, Theorem 2.8.1] to the right sum in (24) yields a concentration bound holding with the stated probability for all t ≥ 0 and an absolute constant c > 0. Then, by choosing t = c_1 m_X m_1 log(m_0 + 1) ‖X‖²_{ψ_2}, we obtain the claimed estimate with probability at least (26). From (21), combining (22) and (25) yields the assertion, where we used Lemma 7 in the second inequality, and the result holds at least with the probability given in (26). Setting C := √(8 c_1) κ_1 κ_2 η_2 > 0 finishes the proof.

Lemma 9
Let μ_X be centered with supp(μ_X) ⊆ B_1^{m_0}. Furthermore, assume that f ∈ F(m_0, m_0, m_1) and that Ŵ is given by (19) with step-size ε > 0.

Proof By Weyl's inequality and re-using (22), we obtain a first bound. The first term of its right-hand side can be written as a sum of the outer products of the columns vec(∇²f(X_i)). Additionally, the matrices vec(∇²f(X_i)) ⊗ vec(∇²f(X_i)) are independent and positive semidefinite random matrices. The Chernoff bound for eigenvalues of sums of random matrices, due to Gittens and Tropp [25], applied to the right-hand side of the last equation yields the desired lower bound with the stated probability. To estimate K more explicitly, we first have to bound the norm of the Hessian matrices. Let X ∼ μ_X; then ‖∇²f(X)‖ admits an upper bound with some constant C_1 > 0, which allows us to further estimate K. Finally, we finish the proof by plugging the above into (27) and by setting t = 1/2.

Proof of Theorem 5
The proof is a combination of the previous lemmas together with an application of Wedin's bound [59, 66]. Given Ŵ, W*, let Û Σ̂ V̂^T and U* Σ* (V*)^T be their respective singular value decompositions. Furthermore, denote by Û_1, U*_1 the matrices formed by only the first m_0 + m_1 columns of Û, U*, respectively. According to this notation, Algorithm 2 returns the orthogonal projection P_Ŵ = Û_1 Û_1^T. We also denote by P_{W*} the projection given by U*_1 (U*_1)^T. Then we can bound the difference of the projections by applying Wedin's bound. Since W has dimension m_0 + m_1, we have max_{k ≥ m_0+m_1+1} σ_k(W*) = 0. Therefore the second inequality is equivalent to the first, and we can choose ᾱ = σ_{m_0+m_1}(Ŵ) ≤ min_{1 ≤ j ≤ m_0+m_1} σ_j(Ŵ). Thus, we end up with an inequality bounding ‖P_{W*} − P_Ŵ‖_F. Applying the union bound for the two events in Lemma 8 and Lemma 9, in combination with the respective inequalities, yields the claim, where the constants are those from the lemmas above.
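The quantity controlled by Wedin's bound is the Frobenius distance between the two orthogonal projections, which can be computed directly from orthonormal bases of the subspaces. A small sketch with illustrative names:

```python
import numpy as np

def projection_distance(U, V):
    """||P_U - P_V||_F for subspaces given by matrices U, V with orthonormal
    columns; for equal dimension k this equals sqrt(2*(k - ||U^T V||_F^2))."""
    P = U @ U.T
    Q = V @ V.T
    return np.linalg.norm(P - Q)
```

Identical subspaces give distance 0, while orthogonal one-dimensional subspaces give √2, the largest possible value for k = 1.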

Recovery of Individual (Entangled) Neural Network Weights
The symmetric rank-1 matrices formed as tensors of (entangled) neural network weights are the spanning elements of W, which in turn can be approximated by Ŵ as proved above. In this section, we explain under which conditions it is possible to stably identify approximations to the network profiles {a_i : i ∈ [m_0]} ∪ {v_ℓ : ℓ ∈ [m_1]} by a suitable selection process, Algorithm 3.
To simplify notation, we drop the differentiation between weights a_i and v_ℓ and simply denote W = span{w_1 ⊗ w_1, …, w_m ⊗ w_m}, where m = m_0 + m_1 and every w_ℓ equals either one of the a_i's or one of the v_ℓ's. Thus, m may be larger than d. We also use the notations W_j := w_j ⊗ w_j and Ŵ_j := P_Ŵ(W_j). Provided that the approximation error δ := ‖P_W − P_Ŵ‖_F satisfies δ < 1 (cf. Theorem 5), {Ŵ_j : j ∈ [m]} is the image of a basis under a bijective map and thus can be used as a basis for Ŵ (see Lemma 32 in the Appendix). We quantify the deviation from orthonormality by ν := C_F − 1, see (9). As an example of suitable frames, normalized tight frames achieve the bounds c_f = C_F = m/d [4, Theorem 3.1], see also [11]. For instance, for such frames m = 1.2d > d would allow for ν = 0.2. These finite frames are related to the Thomson problem of spherical equidistribution, which asks for the optimal way to place m points on the sphere S^{d−1} in R^d so that the points are as far away from each other as possible. We further note that if 0 < ν < 1, then {W_j : j ∈ [m]} is a system of linearly independent matrices and therefore a Riesz basis (see Lemma 29 and (47) in the Appendix). We denote the corresponding lower and upper Riesz constants by c_r, C_R.
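The frame constants c_f, C_F (and hence ν = C_F − 1) of a given system of unit vectors are the extreme eigenvalues of its frame operator. A brief sketch under that definition:

```python
import numpy as np

def frame_bounds(W):
    """Frame bounds of the columns w_1,...,w_m of W in R^d, i.e., the extreme
    eigenvalues of the frame operator S = sum_j w_j w_j^T = W @ W.T."""
    eig = np.linalg.eigvalsh(W @ W.T)
    return eig[0], eig[-1]
```

For an orthonormal basis both bounds equal 1 (so ν = 0), and for a unit-norm tight frame both equal m/d, in line with [4, Theorem 3.1].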
Finally, for any real symmetric matrix X, we denote by λ_1(X), λ_2(X), … its eigenvalues ordered by decreasing absolute value and by u_1(X), u_2(X), … corresponding normalized eigenvectors. In the following, we provide in Theorem 11 general recovery guarantees for network weights given by the eigenvector associated to the largest eigenvalue in absolute value of any suitable matrix M ∈ Ŵ ∩ S.

Remark 10
The problem considered in this section is how to approximate the individual w_ℓ ⊗ w_ℓ within the space W, or more precisely by using its approximation Ŵ. As the analysis below is completely unaware of how the space Ŵ has been constructed (in particular, it does not rely on the fact that Ŵ comes from second-order differentiation of a two hidden layer network), we are implicitly able to address also the problem of the identification of weights for one hidden layer networks (2) with a number m of neurons larger than the input dimension d, which was left as an open problem in [23].

Recovery Guarantees
The network profiles {w_j : j ∈ [m]} are (up to sign) uniquely defined by the matrices {W_j : j ∈ [m]}, as they are precisely the eigenvectors corresponding to the unique nonzero eigenvalue. Therefore, it suffices to recover {W_j : j ∈ [m]}, and we have to study when such matrices can be uniquely characterized within the matrix space W by their rank-1 property. Let us stress that this problem is strongly related to similar and very relevant ones appearing recently in the literature on nonconvex programs to identify sparse vectors and low-rank matrices in linear subspaces, see, e.g., [45, 49]. In Appendix 2 (Lemma 30 and Corollary 31), we prove that unique identification is possible if any subset of m/2 + 1 vectors of {w_j : j ∈ [m]} is linearly independent, and that such subset linear independence is actually implied by the frame bounds (9) if ν = C_F − 1 < (m/2)^{−1}. Unfortunately, this assumption seems a bit too restrictive in our scenario; hence, we instead resort to a weaker and robust version given by the following result. In particular, we prove that any near rank-1 matrix in Ŵ of unit Frobenius norm is not too far from one of the W_j's, provided that δ and ν are small.
Before proving Theorem 11, we need a technical result (Lemma 12), whose proof relies on Lemma 32 and a contradiction argument.

Proof of Theorem 11 Let λ_1 := λ_1(M), u_1 := u_1(M) for short in this proof. We can represent M in terms of the basis elements of Ŵ as M = Σ_{j=1}^m σ_j Ŵ_j, and let Z ∈ W satisfy M = P_Ŵ(Z). Furthermore, let σ_{j*} = max_j σ_j ≥ 0, where the nonnegativity follows from Lemma 12. Using Z = Σ_{j=1}^m σ_j w_j ⊗ w_j and ‖Z‖_F ≤ (1 − δ)^{−1}, we first derive two preliminary estimates. To bound the numerator, we use the triangle inequality and then bound the first term by means of λ_1 − σ_{j*} ≤ 2δ + 2ν and the frame property (9). Combining these estimates with (30) and δ/(1 − δ) ≤ 2δ, we obtain the claim. The result follows since {w_j ⊗ w_j : j ∈ [m]} is a Riesz basis and thus the coefficients σ_j are controlled via the lower Riesz constant.

The preceding result provides recovery guarantees for network weights given by the eigenvector associated to the largest eigenvalue in absolute value of any suitable matrix M ∈ Ŵ ∩ S. The estimate is inversely proportional to the spectral gap λ_1(M) − λ_2(M). The problem then becomes the constructive identification of matrices M belonging to Ŵ ∩ S which simultaneously maximize the spectral gap. Inspired by the results in [23], we propose the following nonconvex program as a selector of such matrices: maximize the spectral norm ‖M‖ over M ∈ Ŵ with ‖M‖_F = 1, cf. (31). By maximizing the spectral norm under a Frobenius norm constraint, a local maximizer of the program should be as nearly rank-1 as possible within a given neighborhood. Moreover, if rank-1 matrices exist in Ŵ, these are precisely the global optimizers.

A Nonlinear Program: Properties of Local Maximizers of (31)
In this section, we prove that, except for spurious cases, local maximizers of (31) are generically almost rank-1 matrices in Ŵ. In particular, we show that local maximizers either satisfy ‖M‖² ≥ 1 − cδ − c′ν, for some small constants c, c′, implying near minimal rankness, or ‖M‖² ≤ cδ + c′ν, i.e., all eigenvalues of M are small; these are the mentioned spurious cases. Before addressing these estimates, we provide a characterization of the first- and second-order optimality conditions for (31), see [23] and also [61, 62].
for all X ∈ Ŵ. A stationary point M (in the sense that M fulfills (32)) is a local maximizer of (31) if and only if (33) holds for all X ∈ Ŵ.

Proof The statement requires minor modifications of [23, Theorem 3.4], and the proof follows along analogous lines. For the reader's convenience, we give a self-contained proof of the statement below, with some key computations borrowed from [23]. For simplicity, we drop the argument M in λ_i, u_i, and without loss of generality we assume λ_{i*} = ‖M‖; otherwise we consider −M. Following the analysis in [23], for X ∈ Ŵ ∩ S we can consider the function f_X(α) := ‖M + αX‖ / ‖M + αX‖_F, because M is a local maximizer if and only if α = 0 is a local maximizer of f_X for all X ∈ Ŵ ∩ S. Let us consider X ∈ Ŵ ∩ S with X ⊥ M first. We note that the simplicity of λ_{i*} implies that there exist analytic functions λ_{i*}(α) and u_{i*}(α) with (M + αX) u_{i*}(α) = λ_{i*}(α) u_{i*}(α) for all α in a neighborhood of 0 [40, 50]. Therefore we can use a local expansion of f_X around α = 0. Differentiating once, we get f_X′(0) = λ_{i*}′(0); hence, α = 0 is a stationary point if and only if λ_{i*}′(0) vanishes. Following the computations in [23], we find that λ_{i*}′(0) = u_{i*}(0)^T X u_{i*}(0) = 0, and thus (32) follows for any X ⊥ M. For general X, we split X = ⟨X, M⟩ M + X_⊥ and argue analogously. For (33), we additionally have to check f_X″(0) ≤ 0. The second derivative of f_X at zero is given by f_X″(0) = λ_{i*}″(0) − λ_{i*}(0); hence, the condition for attaining a local maximum is λ_{i*}″(0) ≤ λ_{i*}(0). Again, we can follow the computations in [23] to compute λ_{i*}″(0), and (33) follows immediately for any X ⊥ M, ‖X‖_F = 1. For general X, we decompose as before.

To derive the bounds for λ_1, we establish an inequality 0 ≤ λ_1²(λ_1² − 1) + cδ + c′ν, which implies that λ_1²(M) is either close to 0 or close to 1. A first ingredient for obtaining the inequality is an estimate in which we used ‖Ŵ_j u_1‖_2 − u_1^T Ŵ_j u_1 ≤ 2δ (see Lemma 33 in Appendix 2) in the inequality and (32) in the equality.
The other useful technical estimate is provided in the following lemma, which is proven by leveraging the second-order optimality condition (33).

Lemma 14
Assume that M is a local maximizer satisfying (A1) and (A2), and let max{δ, ν} < 1/4. Then, for any X ∈ Ŵ with ‖X‖_F ≤ 1, the stated estimate holds. For the proof of Lemma 14, we need a lower bound on the smallest eigenvalue (see Appendix 2 for the proof of Lemma 15).

Lemma 15
Assume that M is a stationary point of (31) satisfying (A1) and (A2). Then the smallest eigenvalue admits the stated lower bound.

Proof of Lemma 14 We first use (32) and (33), and then rearrange the resulting inequality. Using the lower bound for λ_D from Lemma 15 and λ_1 ≤ 1, we obtain an intermediate estimate. By combining (34) and (35), the bounds for λ_1 follow.
In Sect. 3.2, we analyze local maximizers of (31) and show that there exist small constants c, c′ such that either ‖M‖² ≥ 1 − cδ − c′ν or ‖M‖² ≤ cδ + c′ν. Therefore, a local maximizer of (31) is either almost rank-1 or it has its energy distributed across many eigenvalues. This criterion can be easily checked in practice, and therefore maximizing (31) is a suitable approach for finding near rank-1 matrices in Ŵ. In this section, we show how those individual symmetric rank-1 tensors can be approximated by a simple iterative algorithm, Algorithm 3, making exclusive use of the projection P_Ŵ. Algorithm 3 strives to solve the nonconvex program (31) by iteratively increasing the spectral norm of its iterates. Our approach is closely related to the projected gradient ascent iteration [23, Algorithm 4.1], but we introduce some modifications; in particular, we exchange the order of the normalization and the projection onto Ŵ. The proof of convergence of [23, Algorithm 4.1] takes advantage of that different ordering of these operations to address the case where W is spanned by at most m ≤ d rank-1 matrices formed as tensors of nearly orthonormal vectors (after whitening). In fact, its analysis is heavily based on approximate singular value or spectral decompositions. Unfortunately, in our case the decomposition M = Σ_{j=1}^m σ_j w_j ⊗ w_j does not approximate the singular value or spectral decomposition, since the w_j's are redundant (they form a frame) and therefore are not nearly orthonormal in the sense required in [23].
Algorithm 3 is based on the iterative application of the operator F_γ defined by F_γ(X) := P_S(P_Ŵ(X + γ u_1(X) ⊗ u_1(X))), with γ > 0 and P_S the projection onto the sphere S = {X : ‖X‖_F = 1}. The following lemma shows that, if λ_1(X) > 0, the operator F_γ is well-defined, in the sense that it is a single-valued operator.

‖P_Ŵ(X + γ u_1(X) ⊗ u_1(X))‖_F > 0. In particular, F_γ(X) is well-defined and can be explicitly expressed in closed form.
Proof The result follows from ⟨X, P_Ŵ(u_1(X) ⊗ u_1(X))⟩ = λ_1(X) and from computing explicitly the squared norm ‖P_Ŵ(X + γ u_1(X) ⊗ u_1(X))‖²_F.

We analyze next the sequence (M_j)_{j∈N} generated by Algorithm 3. We show that (λ_1(M_j))_{j∈N} is a strictly monotone increasing sequence converging to a well-defined limit λ_∞ = lim_{j→∞} λ_1(M_j), and that, if λ_1(M_j) > 1/√2 for some j, all convergent subsequences of (M_j)_{j∈N} converge to fixed points of F_γ. Moreover, we prove that such fixed points satisfy (32) and are thus stationary points of (31). We begin by providing two equivalent characterizations of (32).
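The core update of Algorithm 3 can be sketched as follows, assuming an orthonormal basis B of the vectorized subspace Ŵ is available (e.g., from the subspace estimation step). This is a simplified illustration under the assumption λ_1(M_j) > 0, not the paper's implementation.

```python
import numpy as np

def proj_subspace(X, B):
    """Orthogonal projection of the symmetric matrix X onto the subspace
    spanned (in vectorized form) by the orthonormal columns of B."""
    c = B.T @ X.reshape(-1)
    return (B @ c).reshape(X.shape)

def rank_one_ascent(M, B, gamma=1.0, steps=100):
    """Iterate M <- P_S(P_What(M + gamma * u1 u1^T)), where u1 is the top
    eigenvector of M and P_S normalizes in Frobenius norm."""
    lams = []
    for _ in range(steps):
        vals, vecs = np.linalg.eigh(M)
        u = vecs[:, -1]                      # eigenvector of the largest eigenvalue
        lams.append(vals[-1])
        M = proj_subspace(M + gamma * np.outer(u, u), B)
        M /= np.linalg.norm(M)               # projection onto the sphere S
    return M, lams
```

On a toy subspace spanned by e_1 ⊗ e_1 and e_2 ⊗ e_2, the iteration drives λ_1 monotonically towards 1 and M towards one of the spanning rank-1 matrices, in line with the monotonicity result of Lemma 20.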

Lemma 18 For M ∈ Ŵ and c > 0, the two conditions below are equivalent.
Proof Assume that v^T X v = c⟨X, M⟩ for all X. We notice that the assumption is equivalent to ⟨X, v ⊗ v − cM⟩ = 0 for all X ∈ Ŵ. Therefore P_Ŵ(v ⊗ v − cM) = 0, and the result follows from M ∈ Ŵ. The remaining case is argued similarly.

Proof We drop the argument X in λ_j(X) and u_j(X) for simplicity. We first calculate a chain of inequalities, with equality if and only if ‖P_Ŵ(u_j ⊗ u_j)‖_F = λ_j; hence (38) is actually a chain of equalities. Specifically, Lemmas 18 and 19 show that the stationary point condition (32) for M with ‖M‖ = |λ_{i*}(M)| and isolated λ_{i*} is equivalent to both characterizations above. A similar condition appears naturally if we characterize the fixed points of F_γ.
The preceding Lemma implies the convergence of (λ 1 (M j )) j∈N by monotonicity. Moreover, we can also use such convergence to establish M j+1 − M j F → 0.
and ‖U_j‖_F ≥ λ_j ≥ λ_0 for all j. Define the shorthand ε_j := ‖U_j‖_F − λ_j. We will now show that ‖M_{j+1} − M_j‖_F ≤ C ε_j for some constant C. First notice that there exists a matrix E_j with M_j = λ_j^{−1} U_j + E_j and ‖E_j‖ ≤ λ_0^{−1} 2ε_j. Furthermore, by the triangle inequality, it remains to bound the first term. Using M_j = λ_j^{−1} U_j + E_j and the analogous representation of M_{j+1}, and since ‖M_j + γ U_j‖_F ≥ 1 according to Lemma 17, the convergence ‖M_{j+1} − M_j‖ → 0 follows.
It remains to show that convergent subsequences of (M_j)_{j∈N} converge to fixed points of F_γ. Then, by (40) and Lemmas 18 and 19, fixed points satisfy the first-order optimality condition (32) and are stationary points of (31). To prove convergence of subsequences to fixed points, we require continuity of F_γ. The following lemma shows that F_γ is continuous for matrices X satisfying λ_1(X) > 1/√2, i.e., if the largest eigenvalue is isolated and u_1(X) is a continuous function of X.

Proof F_γ(X) ∈ M_ε follows directly from Lemma 20, i.e., from the fact that the largest eigenvalue is only increased by applying F_γ. For the continuity, consider X, Y ∈ M_ε. We first note that by using [6, Theorem 7.3.1] and λ_i(Y) ≤ √(1/2 − ε) for i = 2, …, m_0, we obtain a continuity estimate for the top eigenvectors. Furthermore, we have ‖X + γ P_Ŵ(u_1(X) ⊗ u_1(X))‖²_F ≥ 1 according to Lemma 17, and therefore P_S acts on X + γ P_Ŵ(u_1(X) ⊗ u_1(X)) and Y + γ P_Ŵ(u_1(Y) ⊗ u_1(Y)) as a projection onto the convex set {X : ‖X‖_F ≤ 1}. Therefore it acts as a contraction, and the result follows. The convergence to fixed points of any subsequence of (M_j)_{j∈N} now follows as a corollary of Lemma 34 in the Appendix.

Remark 24
The analysis of the convergence of Algorithm 3 provided above does not use the structure of the space W and focuses exclusively on the behavior of the first eigenvalue λ_1. As a consequence, it guarantees that the iterates have monotonically increasing spectral norm and that they generically converge to stationary points of (31). However, it does not ensure convergence to nonspurious, minimal rank local maximizers of (31). In the numerical experiments of Sect. 4, where {w_j : j ∈ [m]} are sampled randomly from certain distributions, an overwhelming majority of the sequences (M_j)_{j∈N} converges to a near rank-1 matrix with an eigenvalue close to one, whose corresponding eigenvector approximates a network profile with good accuracy. To explain this success, we would need a finer and quantitative analysis of the increase of the spectral norm during the iterations, for instance by quantifying the gap by means of a suitable constant between 0 and 1. As clarified in the proof of Lemma 19, the smaller this constant is, the larger is the increase of the spectral norm ‖M_{j+1}‖ > ‖M_j‖ between iterations of Algorithm 3. The following result is an attempt to gain a quantitative estimate for this constant by injecting more information about the structure of the space W.
In order to simplify the analysis, let us assume δ = 0 or, equivalently, Ŵ = W.
Denote by {W̃_ℓ : ℓ ∈ [m]} the canonical dual frame, so that every symmetric matrix X admits a frame decomposition. Then, for X ∈ W and with the notation λ_j := λ_j(X), λ_1 = ‖X‖, and u_j := u_j(X), the estimate (42) holds.

Proof Let us fix X ∈ W. Then we have two ways of representing X: its frame decomposition and its spectral decomposition. By using both decompositions and again the notation W_ℓ = w_ℓ ⊗ w_ℓ, together with the canonical dual frame upper bound and the Cauchy–Schwarz inequality, we can further estimate the relevant terms. The meaning of estimate (42) is explained by the following mechanism: whenever the deviation of an iterate M_j of Algorithm 3 from being a rank-1 matrix in W is large, in the sense that ‖P_W(u_1 ⊗ u_1)‖_F is small, the constant in (42) is also small, and the iterate M_{j+1} = F_γ(M_j) will efficiently increase the spectral norm. The gain reduces as soon as the iterate M_j gets closer and closer to a rank-1 matrix. It would perhaps be possible to obtain an even more precise analysis of the behavior of Algorithm 3 by considering simultaneously the dynamics of (the gaps between) different eigenvalues, not only focusing on λ_1. Unfortunately, we could not yet find a proper and conclusive argument.

Numerical Experiments

We perform experiments for different scenarios, where either the activation function or the construction of the network weights varies. Guided by our theoretical results, we pay particular attention to how the network architecture, e.g., m_0 and m_1, influences the simulation results. The entire procedure is rather flexible and can be adjusted in different ways, e.g., by changing the distribution μ_X. To provide a fair account of the success, we fix the hyperparameters of the approach throughout all experiments. Test scenarios, hyperparameters, and error measures are reported below in more detail. Afterwards, we present and discuss the results.
As activation functions, we use, e.g., t ↦ 1/(1 + e^{−t}) − 1/2 (shifted sigmoid function), and sample offsets (also called biases) θ_i, τ_ℓ independently at random from N(0, 0.01).

Scenarios and construction of the networks
As made clear by our theory, see Theorem 16, a sufficient condition for successful recovery of the entangled weights is that ν = C_F − 1 be small, where C_F is the upper frame constant of the entangled weights as in Definition 3. In the following numerical experiments, we wish to verify how crucial this requirement is. Thus, we test two different scenarios for the weights. The first scenario, which is designed to best fulfill the sufficient condition ν ≈ 0, models both {a_i : i ∈ [m_0]} and {b_ℓ : ℓ ∈ [m_1]} as perturbed orthogonal systems. For their construction, we first sample orthogonal bases uniformly at random and then apply a random perturbation chosen so as to control the singular values of A and B. In the second case, we sample the (entangled) weights independently from Unif(S^{m_0−1}). In this situation, as long as the dimensionality d = m_0 is relatively small, the system will likely not fulfill the condition ν ≈ 0 well; however, as the dimension d = m_0 is chosen larger, the weights tend to be more incoherent and gradually approach the previous scenario.
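The first scenario can be generated, for instance, as follows; this is a sketch, and the perturbation level `sigma` is an illustrative parameter, not the paper's exact choice.

```python
import numpy as np

def perturbed_orthogonal(d, m, sigma=0.1, seed=0):
    """Sample a random orthogonal basis (QR of a Gaussian matrix), add a
    small Gaussian perturbation, and renormalize the columns to unit norm."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    W = Q[:, :m] + sigma * rng.standard_normal((d, m))
    return W / np.linalg.norm(W, axis=0)
```

For small `sigma`, the resulting columns are unit vectors with small mutual coherence, i.e., a system with ν ≈ 0.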
Hyperparameters Unless stated differently, we sample m_X = 1000 Hessian locations from μ_X = √m_0 Unif(S^{m_0−1}) and use ε = 10^{−5} in the finite difference approximation (17). We generate 1000 random matrices as initializations for the recovery step.

Results for perturbed orthogonal weights The results of the study are presented in Figs. 2 and 3 and show that our procedure typically recovers many of the network weights while suffering only few false positives. Considering for example a sigmoidal network, we have almost perfect recovery of the weights in both layers at a threshold of T = 0.05 for any network architecture, see Fig. 3a, c. For a tanh-network, the performance is slightly worse, but we still recover most of the weights. Inspecting the plots more closely, we can notice some shared trends and differences between sigmoid and tanh networks. In both cases, the performance improves when increasing the input dimensionality or, equivalently, the number of neurons in the first layer, even though the number of weights that need to be recovered increases accordingly. This is particularly the case for tanh-networks, as visualized in Fig. 3b, d, and is most likely caused by the reduced correlation of the weights in higher dimensions. As previously mentioned, the correlation is encoded in the constant ν = C_F − 1 used in the analysis in Sect. 3.
For fixed m_0, on the other hand, different activation functions react differently to changes of m_1. For larger m_1, considering a sigmoid network, the projection error increases and the recovery of weights in the second layer worsens, as shown in Fig. 2a, c. This is expected by Theorem 5. Inspecting the results for tanh networks, the projection error actually decreases when increasing m_1, see Fig. 2b, and the recovery performance gets better. Figure 3d shows that especially weights in the first layer are more easily recovered if m_1 is large, such that the case m_0 = 45, m_1 = 23 allows for perfect recovery at a threshold T = 0.05. This behavior cannot be fully explained by our general theory, e.g., Theorem 5.
Results for random weights from the unit sphere When sampling the weights independently from the unit sphere, the recovery problem seems more challenging for moderate dimension d = m 0 and for both activation functions. This confirms the expectation that the smallness of ν = C F − 1 is somehow crucial. Figure 4c, d suggest that especially recovering the weights of the second layer is more difficult than in the perturbed orthogonal case. Still, we achieve good performance in many cases. For sigmoid networks, Fig. 4c shows that we always recover most weights in the first layer, and a large portion of weights in the second layer if m 1 /m 0 is small. Moreover, keeping m 1 /m 0 constant while increasing m 0 improves the performance significantly, as we expect from an improved constant ν = C F − 1. Figure 4a, c show almost perfect recovery for m 0 = 45, m 1 = 5, while suffering only few false positives.
For tanh-networks, Fig. 4d shows that increasing m 0 benefits recovery of weights in both layers, while increasing m 1 benefits recovery of first layer weights and harms recovery of second layer weights. We still achieve small false positive rates in Fig. 4b, and good recovery for m 0 = 45, and the trend continues when further increasing m 0 .
Finally, a notable difference between the perturbed orthogonal case and the unit-sphere case is the behavior of the projection error ‖P_Ŵ − P_W‖_F/(m_0 + m_1) for networks with sigmoid activation function. Comparing Figs. 2a and 5a, the dependency of the projection error on m_1 is stronger when sampling independently from the unit sphere. This is explained by Theorem 5, since ‖B‖_2 is independent of m_1 in the perturbed orthogonal case and grows like O(√m_1) when sampling from the unit sphere.

Open Problems
With the previous theoretical results of Sect. 3 in mind, we collect here several open problems. (i) In Theorem 5, the dependency of α > 0 on the network architecture and on the input distribution μ_X is left implicit. However, it plays a crucial role for fully estimating the overall sample complexity. (ii) While we could prove that Algorithm 3 increases the spectral norm of its iterates in Ŵ ∩ S, we could not yet show that it always converges to nearly rank-1 matrices in Ŵ, although this is observed numerically, see also Remark 24. We also could not exclude the existence of spurious local maximizers of the nonlinear program (31), as stated in Theorem 16. However, we conjecture that there are none or that they are somehow hard to observe numerically. (iii)–(iv) Two further problems concern the attribution of the entangled weights to the respective layers and the identification of the remaining parameters of the network. In the case where g_i(·) = φ(· − θ_i) and h_ℓ(·) = φ(· − τ_ℓ), this would simply mean being able to identify the shifts θ_i, i ∈ [m_0], and τ_ℓ, ℓ ∈ [m_1]. Such identification is also crucial for computing the matrix G_0 = diag(g_1′(0), …, g_{m_0}′(0)), which allows the disentanglement of the weights b_ℓ from the weights A via v_ℓ = A G_0 b_ℓ / ‖A G_0 b_ℓ‖_2. At this point, the network is fully reconstructed. (v) The generalization of our approach to networks with more than two hidden layers is clearly the next relevant issue to be considered as a natural development of this work.
While problems (i) and (ii) seem difficult to solve by the methods we used in this paper, we think that problems (iii) and (iv) are solvable both theoretically and numerically with just a bit more effort. For a self-contained conclusion of this paper, in the following sections we sketch some possible approaches to these issues, as a glimpse of future developments, which will be included more exhaustively in [21]. The generalization of our approach to networks with more than two hidden layers, as mentioned in (v), is surprisingly simpler than one may expect, and it is in the course of finalization [21]. For a network with L > 2 fully nonlinear layers, again by second-order differentiation it is possible to obtain an approximation of the matrix space spanned by the tensors of entangled weights, where suitable diagonal matrices G_i depending on the activation functions appear in the entanglement.

Reconstruction of the Entire Network
In this section, we address problems (iii) and (iv) as described in Sect. 5. Our final goal is of course to construct a two-layer network f̂ with numbers of nodes equal to m_0 and m_1 such that f̂ ≈ f. Additionally, we also study whether the individual building blocks (e.g., matrices Â, B̂, and biases in both layers) of f̂ match their corresponding counterparts of f. To construct f̂, we first discuss how the recovered entangled weights {ŵ_i : i ∈ [m_0 + m_1]} (see Sect. 4) can be assigned to either the first or the second layer, depending on whether ŵ_j approximates one of the a_i's or one of the v_ℓ's. Afterwards, we discuss a modified gradient descent approach that optimizes the deparametrized network (its entangled weights are known at this point!) over the remaining, unknown parameters of the network function, e.g., the biases θ_i and τ_ℓ.

Distinguishing First and Second Layer Weights
Attributing approximate entangled weights to the first or second layer is generally a challenging task. In fact, even the true weights {a_i : i ∈ [m_0]}, {v_ℓ : ℓ ∈ [m_1]} cannot be assigned to the correct layer based exclusively on their entries when no additional a priori information (e.g., some distributional assumption) is available. Therefore, assigning ŵ_j, j ∈ [m_0 + m_1], to the correct layer requires using again the network f itself, and thus querying additional information.
The strategy we sketch here is designed for sigmoidal activation functions and networks with (perturbed) orthogonal weights in each layer. Sigmoidal functions are monotonic, have a bell-shaped first derivative, and are bounded by two horizontal asymptotes as the input tends to ±∞. If the activation functions {g_i : i ∈ [m_0]} and {h_ℓ : ℓ ∈ [m_1]} are translated sigmoidal, these properties, combined with (43), imply that ‖∇f(t a_i)‖ is expected to tend to 0 much more slowly than ‖∇f(t v_ℓ)‖ as t → ∞. In fact, if {a_i : i ∈ [m_0]} were an exactly orthonormal system, ‖∇f(t a_i)‖ would eventually approach a positive constant as t → ∞. We illustrate in Fig. 6 the different behavior of the trajectories t ↦ ‖∇f(t w)‖² for w ∈ {ŵ_j ≈ a_i for some i} and for w ∈ {ŵ_j ≈ v_ℓ for some ℓ}.
Practically, for T ∈ N and for each candidate vector in {ŵ_j : j ∈ [m_0 + m_1]}, we query f at few steps {t_k : k ∈ [T]} in order to approximate a scalar feature Î(ŵ_j) of the trajectory t ↦ ‖∇f(t ŵ_j)‖², where the gradients are computed by finite differences of the queried values. (We consider the same scenarios as in Sect. 4; e.g., POD/sig stands for perturbed orthogonal design with sigmoid activation, and S^{m_0−1}/tanh for weights sampled independently from the unit sphere with tanh activation.) Then we compute a permutation π : [m] → [m], m = m_0 + m_1, to order the weights so that Î(ŵ_π(i)) ≥ Î(ŵ_π(j)) whenever π(i) > π(j).
We run numerical experiments to assess the proposed strategy. Hyperparameters are ε = 10^{-5} for the step length in the finite difference approximation of ∇f(·), and t_k = −20 + k for k ∈ [40]. The results for all four scenarios considered in Sect. 4 are reported in Table 1. We see that our simple strategy achieves remarkable success rates, in particular if the network weights in each layer represent perturbed orthogonal systems. If the weights are sampled uniformly from the unit sphere with moderate dimension d = m_0, then, as one may expect, the success rate drops. In fact, for small d = m_0, the vectors {a_i : i ∈ [m_0]} tend to be less orthogonal, and thus the assumption a_i^T a_j ≈ 0 for i ≠ j is not satisfied anymore.
Finally, we stress that the proposed strategy is simple and efficient, and relies on only a few additional point queries of f, which are negligible compared to the recovery step itself (for reasonable query size T). In fact, the method relies on a single (nonlinear) feature of the map t ↦ ‖∇f(t ŵ_j)‖² in order to decide upon the label of ŵ_j. We regard it as an interesting direction for future work to develop more robust approaches, potentially using higher-dimensional features of the trajectories t ↦ ‖∇f(t ŵ_j)‖², to achieve high success rates even if a_i^T a_j ≈ 0 for i ≠ j does not hold anymore.
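The layer-assignment heuristic can be sketched in a few lines. The following is a minimal numerical illustration, assuming zero biases (so that the entangled second-layer weights reduce to the normalized columns of AB, since G = Id) and a specific choice of feature, Î(w) = Σ_k ‖∇f(t_k w)‖², which we pick here only for illustration; the text merely requires some scalar feature separating the two trajectory behaviors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = m0 = 4
m1 = 3

# Ground-truth network f(x) = 1^T tanh(B^T tanh(A^T x)) with orthonormal A
# (biases set to zero for simplicity of the sketch).
A = np.linalg.qr(rng.standard_normal((d, m0)))[0]
B = rng.standard_normal((m0, m1))
B /= np.linalg.norm(B, axis=0)

def f(x):
    return np.sum(np.tanh(B.T @ np.tanh(A.T @ x)))

def grad_norm(x, eps=1e-5):
    # central finite differences with step length 1e-5, as in the experiments
    g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(d)])
    return np.linalg.norm(g)

# Entangled second-layer weights (here G = Id because the biases are zero):
V = A @ B
V /= np.linalg.norm(V, axis=0)
candidates = list(A.T) + list(V.T)   # first m0 belong to layer 1, rest to layer 2

# Feature of the trajectory t -> ||grad f(t w)||^2, summed over the query
# steps t_k = -20 + k, k = 1..40 (our illustrative choice of feature):
ts = np.arange(-19, 21)
I = [sum(grad_norm(t * w) ** 2 for t in ts) for w in candidates]

order = np.argsort(I)[::-1]          # sort features in decreasing order
first_layer = set(order[:m0])        # largest m0 features -> first layer
print(first_layer)                   # ideally the indices of the a_i rays
```

Along rays t a_i the gradient norm plateaus at a positive constant, while along rays t v_ℓ it decays, so the summed feature separates the two groups.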

Reconstructing the Network Function Using Gradient Descent
The previous section allows assigning the unlabeled candidates {ŵ_j : j ∈ [m_0 + m_1]} to either the first or second layer, resulting in matrices Â = [â_1 | ... | â_{m_0}] and V̂ = [v̂_1 | ... | v̂_{m_1}] that ideally approximate A and V up to column signs and permutations.
Assuming that the network f ∈ F(m_0, m_0, m_1) is generated by shifts of one activation function, i.e., g_i(t) = φ(t + θ_i) and h_ℓ(t) = φ(t + τ_ℓ) for some φ, this means only signs, permutations, and bias vectors θ ∈ R^{m_0}, τ ∈ R^{m_1} are missing to fully reconstruct f. In this section, we show how to identify these remaining parameters by applying a gradient descent method that minimizes the least squares of the output misfit of the deparametrized network. In fact, as we clarify below, the original network f can be explicitly described as a function of the known entangled weights a_i and v_ℓ and of the unknown remaining parameters (signs, permutations, and biases), see Proposition 26 and Corollary 27.
Let now D_m denote the set of m × m diagonal matrices, and define a parameter space Ω whose elements ω = (D_1, D_2, D_3, w, z) consist of three diagonal matrices and two bias vectors w ∈ R^{m_0}, z ∈ R^{m_1}, together with a number m_f of additionally sampled points X_i ∼ N(0, Id_{m_0}). The parameter fitting can be formulated as solving the least squares problem min_{ω ∈ Ω} Σ_{i=1}^{m_f} (f(X_i) − f̂_ω(X_i))². (45) We note that, due to the identification of the entangled weights and the deparametrization of the problem, dim(Ω) = 3m_0 + 2m_1, which implies that the least squares problem has significantly fewer free parameters than the number m_0² + m_0 m_1 + (m_0 + m_1) of original parameters of the entire network. Hence, our previous theoretical results of Sect. 3 and numerical experiments of Sect. 4 greatly scale down the usual effort of fitting all parameters at once. We also mention at this point that the optimization (45) might have multiple global solutions due to possible symmetries, see also [20] and Remark 28, and we shall try to take the most obvious ones into account in our numerical experiments.
We will now show that there exist parameters (D_1, D_2, D_3, w, z) ∈ Ω that allow for exact recovery of the original network, whenever Â and V̂ are correct up to signs and permutations. We first need the following proposition, which provides a different reparametrization of the network using Â and V̂. The proof of the proposition requires only elementary linear algebra and properties of sign and permutation matrices. Details are deferred to Appendix 3.
Proposition 26 If there are sign matrices S_A, S_V and permutations π_A, π_V such that Âπ_A = A S_A and V̂π_V = V S_V, then f can be rewritten as a network f̃ whose layers are expressed in terms of Â and V̂ and of the remaining unknown parameters. We note here that replacing f̂_ω by f̃ in (45) is tempting because it further reduces the number of parameters (dim(D_{m_0} × D_{m_1} × R^{m_0} × R^{m_1}) = 2(m_0 + m_1)), but, by an explicit computation, one can show that evaluating the gradient of f̃ with respect to D requires also the evaluation of D^{-1}. Having in mind that D ideally converges to S_A during the optimization, the diagonal entries of D are likely to cross zero while optimizing. Thus, such a minimization may turn out to be unstable, and we instead work with f̂_ω. The following corollary shows that also this form allows finding optimal parameters leading to the original network.
Corollary 27 Let f ∈ F(m_0, m_0, m_1) with g_i(t) = φ(t + θ_i) and h_ℓ(t) = φ(t + τ_ℓ). If there exist sign matrices S_A, S_V and permutations π_A, π_V such that Âπ_A = A S_A and V̂π_V = V S_V, then there exist diagonal matrices D_1, D_2, D_3 and bias vectors w, z such that f̂_{(D_1, D_2, D_3, w, z)} = f. Proof Based on Proposition 26, we can rewrite f in terms of Â and V̂. Using this, and D = S_A in the definition of B̃ in Proposition 26, the claim follows after multiplying by S_V from the left.

Remark 28 (Simplification for odd functions) If φ in Proposition 26 is odd, i.e., φ(−t) = −φ(t), then sign changes can be absorbed into the remaining parameters, so that the diagonal matrix D_3 is not uniquely determined.
Assuming Â and V̂ are correct up to signs and permutations, Corollary 27 implies that J = 0 is the global optimum of (45), and it is attained by parameters leading to the original network f. Furthermore, Remark 28 implies that there is ambiguity with respect to D_3 if φ is an odd function. Thus, we can also prescribe D_3 = Id_{m_0} and neglect optimizing this variable if φ is odd.
We now study numerically the feasibility of (45). First, we consider the case Â = A and V̂ = V to assess (45) in isolation, so as not to suffer possible errors from other parts of our learning procedure (see Sects. 4 and 6.1). Afterwards, we take these additional approximations into consideration as well, and present results for Â ≈ A and V̂ ≈ V.
We minimize (45) by standard gradient descent with learning rate 0.5 if φ(t) = 1/(1 + e^{−t}) − 1/2 (shifted sigmoid), respectively learning rate 0.025 if φ(t) = tanh(t). We sample m_f = 10(m_0 + m_1) additional points, which is only slightly more than the number of free parameters. Gradient descent is run for 500K iterations (due to the small number of variables, this is not time consuming) and stopped prematurely only if the iteration stalls. Initially, we set D_2 = D_3 = Id_{m_0}, and all other variables are set to random draws from N(0, 0.1).
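To make the fitting step concrete, here is a simplified sketch that recovers only the bias vectors θ and τ by plain gradient descent on the least squares misfit, assuming the weight matrices are known exactly and ignoring the sign/permutation ambiguities and the diagonal matrices D_1, D_2, D_3 of the full formulation (45); sizes, learning rate, and iteration count are illustrative choices, and we use mean rather than summed squares.

```python
import numpy as np

rng = np.random.default_rng(1)
d = m0 = 4
m1 = 3
phi = np.tanh
dphi = lambda t: 1.0 / np.cosh(t) ** 2       # derivative of tanh

# Ground-truth network with KNOWN weights and UNKNOWN biases.
A = np.linalg.qr(rng.standard_normal((d, m0)))[0]
B = rng.standard_normal((m0, m1))
B /= np.linalg.norm(B, axis=0)
theta_true = 0.3 * rng.standard_normal(m0)
tau_true = 0.3 * rng.standard_normal(m1)

def net(X, theta, tau):                      # X has shape (N, d)
    U = phi(X @ A + theta)                   # first layer, (N, m0)
    return phi(U @ B + tau).sum(axis=1)      # second layer summed by 1^T

# m_f = 10 (m0 + m1) additional sample points, as in the experiments
X = rng.standard_normal((10 * (m0 + m1), d))
y = net(X, theta_true, tau_true)

theta = 0.1 * rng.standard_normal(m0)        # init ~ N(0, 0.1), as in the text
tau = 0.1 * rng.standard_normal(m1)
lr = 0.2
for _ in range(50000):                       # gradient descent on mean squares
    P1 = X @ A + theta                       # pre-activations, first layer
    U = phi(P1)
    P2 = U @ B + tau                         # pre-activations, second layer
    r = phi(P2).sum(axis=1) - y              # residuals, shape (N,)
    G2 = dphi(P2)                            # (N, m1)
    grad_tau = 2 * (r[:, None] * G2).mean(axis=0)
    grad_theta = 2 * (r[:, None] * (dphi(P1) * (G2 @ B.T))).mean(axis=0)
    theta -= lr * grad_theta
    tau -= lr * grad_tau

print(np.linalg.norm(theta - theta_true), np.linalg.norm(tau - tau_true))
```

Because the weights are fixed and correct, the global minimizer of the deparametrized problem coincides with the true biases, and the iteration converges to them from the small random initialization.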
Denoting by ω* = (D_1*, D_2*, D_3*, w*, z*) ∈ Ω the gradient descent output, we measure the relative mean squared error (MSE) and the relative L_∞-error of f̂_{ω*} against f, computed using m_test = 50,000 samples Z_i ∼ N(0, Id_{m_0}); the scenarios correspond to those in Sects. 4 and 6.1. Moreover, we also report the relative bias errors E_θ and E_τ, which indicate whether the original bias vectors are recovered. We repeat each experiment 30 times and report averaged values. Table 2 presents the results of the experiments and shows that we reconstruct a network function that is very close to the original network f in both L_2 and L_∞ norm, and in every scenario. The maximal error is ≈ 10^{-3}, which is likely further reducible by increasing the number of gradient descent iterations, or by using finer-tuned learning rates or acceleration methods. Therefore, the experiments strongly suggest that we are indeed reconstructing a function that approximates f uniformly well. Inspecting the errors E_θ and E_τ also supports this claim, at least in all scenarios where the tanh activation is used. In many cases, the relative errors are below 10^{-7}, implying that we recover the original bias vectors of the network. Surprisingly, the accuracy of the recovered biases drops by a few orders of magnitude in the sigmoid case, despite convincing results when measuring the predictive performance in MSE and L_∞-error. We believe that this is due to a faster flattening of the gradients around the stationary point compared to the case of a tanh activation function, and that it can be improved by using more sophisticated strategies for choosing the gradient descent step size. We also tested (45) when fixing D_3 = Id_{m_0}, since tanh and the shifted sigmoid are odd functions and thus Remark 28 applies. The results are consistently slightly better than those in Table 2, but qualitatively similar.
We ran similar experiments for perturbed orthogonal weights and when using Â and V̂ precomputed with the methods we described in Sects. 4 and 6.1. The quality of the results varies depending on whether Â ≈ A and V̂ ≈ V (up to signs and permutations) holds, or a fraction of the weights has not been recovered. To isolate the cases where Â ≈ A and V̂ ≈ V holds, we compute the averaged MSE and L_∞-error over all trials satisfying (46). We report the averaged errors and the number of trials satisfying this condition in Table 3 (there, Trials indicates the percentage of repetitions where Â and V̂ satisfy (46); the scenarios correspond to those in Sects. 4 and 6.1). The table shows that the reconstructed function is close to the original function, even if the weights are only approximately correct. Therefore, we conclude that minimizing (45) is a viable strategy for learning the remaining parameters of the network.

By (17), there exist 0 < ξ_1, ξ_2 < ε such that the corresponding second-order difference representation holds. Hence, we obtain the desired bound. Assume k, n to be fixed and denote x̃ = x + ξ_1 e_k + ξ_2 e_n. Recalling our definition of ϕ_1, we start, as before, by applying the Lipschitz continuity to the summands of (a_{ki}² a_{nj} + a_{ni} a_{ki} a_{nj} + a_{kj} a_{ki} a_{nj} + a_{nj}² a_{ki}) b_i b_j.
Applying the triangle inequality for the Frobenius norm, we obtain corresponding estimates for the terms a_{ki}² a_{nj}, a_{ni} a_{ki} a_{nj}, a_{kj} a_{ki} a_{nj}, and a_{nj}² a_{ki}, where the last inequalities are due to the Lipschitz bounds above. Here we write ξ_{1,kn}, ξ_{2,kn} to make clear that ξ_1, ξ_2 change for every partial derivative of second order. However, all ξ_{1,kn}, ξ_{2,kn} are bounded by ε, so our result still holds. Applying the same procedure to |ϕ_2(x) − ϕ_2(x̃)| yields the bound η_1 κ_3 (|a_{ki}| + |a_{ni}|) + κ_2 η_2 κ_1 Σ_{I=1}^{m_1} b_I (|a_{Ik}| + |a_{In}|).
By setting Ĉ = max{η_1 κ_3, κ_1 κ_2 η_2}, we can develop the same bounds for both parts of the right-hand sum as for ϕ_1, and the claimed estimate follows. Then, for any X ∈ span{w_ℓ ⊗ w_ℓ : ℓ ∈ [m]} ∩ S with rank(X) = 1, there exists ℓ* such that X = w_{ℓ*} ⊗ w_{ℓ*}.

Lemma 32
Let W, Ŵ be matrix subspaces of equal dimension (e.g., subspaces of matrices in R^{d×d}) with corresponding orthogonal projections P_W, P_Ŵ. Assume δ := ‖P_W − P_Ŵ‖_F < 1. Then, for any W ∈ W, we have (1 − δ)‖W‖_F ≤ ‖P_Ŵ W‖_F ≤ ‖W‖_F. In particular, P_Ŵ : W → Ŵ is a bijection.
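A minimal numerical check of Lemma 32, assuming the inequality in question reads (1 − δ)‖W‖_F ≤ ‖P_Ŵ W‖_F ≤ ‖W‖_F: the right inequality is non-expansiveness of orthogonal projections, and the left follows from ‖P_Ŵ W‖ ≥ ‖W‖ − ‖(P_W − P_Ŵ)W‖. The subspace dimension and perturbation size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 6, 3                      # ambient matrix size and subspace dimension

def proj(basis_mats):
    # Orthogonal projection onto the span of the given matrices, acting on
    # matrices flattened to vectors of length d*d.
    M = np.stack([b.ravel() for b in basis_mats], axis=1)   # (d*d, k)
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

base = [rng.standard_normal((d, d)) for _ in range(k)]
pert = [b + 0.05 * rng.standard_normal((d, d)) for b in base]
PW, PWh = proj(base), proj(pert)

delta = np.linalg.norm(PW - PWh)          # Frobenius distance of projections
w = PW @ rng.standard_normal(d * d)       # a random element W of the space W
lo = (1 - delta) * np.linalg.norm(w)
hi = np.linalg.norm(w)
pw = np.linalg.norm(PWh @ w)
print(delta, lo <= pw <= hi + 1e-9)
```

Since the lower bound is strictly positive for δ < 1, P_Ŵ restricted to W has trivial kernel, and equal dimensions then give bijectivity.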
Proof The right inequality follows from the non-expansiveness of orthogonal projections. For the left inequality, write P_Ŵ W = W − (P_W − P_Ŵ)W for W ∈ W, so that ‖P_Ŵ W‖_F ≥ ‖W‖_F − ‖P_W − P_Ŵ‖_F ‖W‖_F = (1 − δ)‖W‖_F. In particular, P_Ŵ is injective on W, and since the dimensions are equal, it is a bijection.

Lemma 33 Moreover, for any unit norm vector v and any Ŵ_j, we have (48)–(50).

Proof We first note the identities needed below. For (48), we assume without loss of generality max_k σ_k = ‖σ‖_∞ (otherwise we perform the proof for −M), and denote j = argmax_i σ_i. For (49), we first notice that it suffices to bound the last two terms, which we estimate separately. For (50), we first rewrite the expression, using that Id − W_j is a projection matrix onto span{w_j}^⊥.

Proof of Lemma 15
We first calculate a lower bound for λ_D in terms of min_i σ_i, where C = c_f if min_i σ_i > 0 and C = C_F if min_i σ_i ≤ 0. We are left with bounding σ_{j*} := min_i σ_i. Clearly, if σ_{j*} > 0, the result follows immediately. Therefore, we concentrate on the case σ_{j*} ≤ 0 in the following. We first use (32) to get (51). Applying now Lemma 33 and ‖Ŵ_{j*}‖ ≥ 1 − δ, we obtain from (51) a lower bound on σ_{j*}. Using this in the previously derived bound for λ_D, and using C_F < 1 + ν, the claim follows; since δ, ν < 1/4, we obtain from (48) that ‖σ‖_∞ ≤ 2.

Lemma 34 Let (A, d) be a metric space and F : A → A be a continuous function.
Let (X_j)_{j∈N} be a sequence generated by X_j = F^j(X_0) for some X_0 ∈ A, and assume d(X_{j+1}, X_j) → 0. Then any convergent subsequence of (X_j)_{j∈N} converges to a fixed point of F.
Proof Let (X_{j_k})_{k∈N} be a convergent subsequence of (X_j)_{j∈N} with limit X̄ = lim_{k→∞} X_{j_k}. Then the subsequence (X_{j_k+1})_{k∈N} satisfies d(X_{j_k+1}, X̄) ≤ d(X_{j_k+1}, X_{j_k}) + d(X_{j_k}, X̄) → 0 as k → ∞, and thus also (X_{j_k+1})_{k∈N} converges to X̄. By construction, X_{j_k+1} = F(X_{j_k}). Taking the limit k → ∞ on both sides and using the continuity of F, we get X̄ = lim_{k→∞} X_{j_k+1} = lim_{k→∞} F(X_{j_k}) = F(lim_{k→∞} X_{j_k}) = F(X̄).
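The mechanism of Lemma 34 can be illustrated with the continuous map F(x) = cos(x) on R: successive differences of the iterates vanish, the sequence converges, and the limit is indeed a fixed point of F (the so-called Dottie number).

```python
import math

# Iterate x_{j+1} = F(x_j) with F = cos. The iterates converge, so by
# Lemma 34 the limit must satisfy cos(x) = x.
x = 0.0
for _ in range(200):
    x = math.cos(x)
print(x, abs(math.cos(x) - x))
```

After a few hundred iterations, the fixed-point residual |cos(x) − x| is at machine-precision level.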

Proof of Proposition 26
Proof The first step is to replace the first layer weights A by Â S_A. This can be achieved by inserting the permutation π_A in the first layer and replacing A by Â S_A, according to the assumption A = Â S_A π_A^T. Next, we need to replace the matrix π_A^T B using V, respectively V̂. Let n ∈ R^{m_1} be defined by n_ℓ = ‖A G b_ℓ‖^{-1}, and let N = diag(n). By the definition of the entangled weights, we have V = A G B N, implying the relation B = G^{-1} A^{-1} V N^{-1}. Using the assumptions A = Â S_A π_A^T and V = V̂ S_V π_V^T, and the properties S_A^{-1} = S_A and π_A^{-1} = π_A^T, we obtain (52). Since G = diag(φ′(θ)), we have π_A^T G π_A = diag(π_A^T φ′(θ)) = diag(φ′(π_A^T θ)) =: G̃, and we insert this into (52). The dot product with a 1-vector is permutation invariant; hence, we can move an additional π_V^T into the second layer. Then, using that the diagonal matrix Ñ := π_V^T N π_V commutes with S_V, we arrive at the claimed form. It remains to show that B̃ = G̃^{-1} S_A Â^{-1} V̂ Ñ^{-1}, which is implied if Ñ_ℓℓ = ‖G̃^{-1} S_A Â^{-1} v̂_ℓ‖. By the normalization property ‖b_ℓ‖ = 1 (see Definition 3) and B = G^{-1} A^{-1} V N^{-1}, we first have n_ℓ = ‖G^{-1} A^{-1} v_ℓ‖. Using this and the assumptions A^{-1} = π_A S_A Â^{-1}, V π_V = V̂ S_V, we obtain Ñ_ℓℓ = ‖G^{-1} π_A S_A Â^{-1} v̂_ℓ‖, where we used that S_V affects v̂_ℓ only by multiplication with ±1. The result follows since π_A^T is orthogonal, and thus ‖G^{-1} π_A S_A Â^{-1} v̂_ℓ‖ = ‖G̃^{-1} S_A Â^{-1} v̂_ℓ‖.
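The algebraic core of the proof, the relation V = A G B N and its inversion B = G^{-1} A^{-1} V N^{-1}, can be verified numerically. The sketch below assumes tanh activations and reads the normalization as n_ℓ = ‖A G b_ℓ‖^{-1}, so that each entangled weight v_ℓ has unit norm; sizes and random draws are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m0, m1 = 4, 3
A = np.linalg.qr(rng.standard_normal((m0, m0)))[0]   # invertible first layer
B = rng.standard_normal((m0, m1))
B /= np.linalg.norm(B, axis=0)                       # normalized columns b_l
theta = 0.3 * rng.standard_normal(m0)
phi_prime = lambda t: 1.0 / np.cosh(t) ** 2          # derivative of tanh
G = np.diag(phi_prime(theta))                        # G = diag(phi'(theta))

# Entangled second-layer weights: v_l = A G b_l, normalized to unit length.
AGB = A @ G @ B
n = 1.0 / np.linalg.norm(AGB, axis=0)                # n_l = ||A G b_l||^{-1}
N = np.diag(n)
V = AGB @ N                                          # so V = A G B N ...

B_rec = np.linalg.inv(G) @ np.linalg.inv(A) @ V @ np.linalg.inv(N)
print(np.allclose(B_rec, B))                         # ... and B = G^{-1} A^{-1} V N^{-1}
```

This also confirms that the columns of V are unit vectors, which is exactly what makes the normalization property ‖b_ℓ‖ = 1 usable in the last step of the proof.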