The Mori-Zwanzig formulation of deep learning

We develop a new formulation of deep learning based on the Mori-Zwanzig (MZ) formalism of irreversible statistical mechanics. The new formulation is built upon the well-known duality between deep neural networks and discrete dynamical systems, and it allows us to directly propagate quantities of interest (conditional expectations and probability density functions) forward and backward through the network by means of exact linear operator equations. Such new equations can be used as a starting point to develop new effective parameterizations of deep neural networks, and provide a new framework to study deep learning via operator-theoretic methods. The proposed MZ formulation of deep learning naturally introduces a new concept, i.e., the memory of the neural network, which plays a fundamental role in low-dimensional modeling and parameterization. By using the theory of contraction mappings, we develop sufficient conditions for the memory of the neural network to decay with the number of layers. This allows us to rigorously transform deep networks into shallow ones, e.g., by reducing the number of neurons per layer (using projection operators), or by reducing the total number of layers (using the decay property of the memory operator).


Introduction
It has been recently shown that new insights on deep learning can be obtained by regarding the process of training a deep neural network as a discretization of an optimal control problem involving nonlinear differential equations [16,15,20]. One attractive feature of this formulation is that it allows us to use tools from dynamical system theory such as the Pontryagin maximum principle or the Hamilton-Jacobi-Bellman equation to study deep learning from a rigorous mathematical perspective [33,21,37]. For instance, it has been recently shown that by idealizing deep residual networks as continuous-time dynamical systems it is possible to derive sufficient conditions for universal approximation in L^p, which can also be understood as an approximation theory that leverages flow maps generated by dynamical systems [34].
In the spirit of modeling a deep neural network as a flow of a discrete dynamical system, in this paper we develop a new formulation of deep learning based on the Mori-Zwanzig (MZ) formalism. The MZ formalism was originally developed in statistical mechanics [41,64] to formally integrate under-resolved phase variables in nonlinear dynamical systems by means of a projection operator. One of the main features of such a formulation is that it allows us to systematically derive exact evolution equations for quantities of interest, e.g., macroscopic observables, based on microscopic equations of motion [9,23,25,7,5,55,14,61,62].
In the context of deep learning, the MZ formalism can be used to reduce the total number of degrees of freedom of the neural network, e.g., by reducing the number of neurons per layer (using projection operators), or by transforming deep networks into shallow networks, e.g., by approximating the MZ memory operator. Computing the solution of the MZ equation for deep learning is not an easy task. One of the main challenges is the approximation of the memory term and the fluctuation (noise) term, which encode the interaction between the so-called orthogonal dynamics and the dynamics of the quantity of interest. In the context of neural networks, the orthogonal dynamics is essentially a discrete high-dimensional flow governed by a difference equation that is hard to solve. Despite these difficulties, the MZ equation of deep learning is formally exact, and can be used as a starting point to build useful approximations and parameterizations that target the output function directly. Moreover, it provides a new framework to study deep learning via operator-theoretic approaches. For example, the analysis of the memory term in the MZ formulation may shed light on the behaviour of recent neural network architectures such as the long short-term memory (LSTM) network [51,19].
This paper is organized as follows. In section 2 we briefly review the formulation of deep learning as a control problem involving a discrete stochastic dynamical system. In section 3 we introduce the composition and transfer operators associated with the neural network. Such operators are the discrete analogues of the stochastic Koopman [54,63] and Frobenius-Perron operators in classical continuous-time nonlinear dynamics. In the neural network setting the composition and transfer operators are integral operators with kernel given by the conditional transition density between one layer and the next. In section 4 we discuss different training paradigms for stochastic neural networks, i.e., the classical "training over weights" paradigm, and a novel "training over noise" paradigm. Training over noise can be seen as an instance of transfer learning in which we optimize the PDF of the noise to re-purpose a previously trained neural network to another task, without changing the neural network weights and biases. In section 5 we present the MZ formulation of deep learning and derive the operator equations at the basis of our theory. In section 6 we introduce a particular class of projection operators, i.e., Mori's projections [62], and study their properties. In section 7 we develop the analysis of the MZ equation, and derive sufficient conditions under which the MZ memory term decays with the number of layers. This allows us to approximate the MZ memory term with just a few terms and re-parameterize the network accordingly. The main findings are summarized in section 8. We also include two appendices in which we establish theoretical results concerning the composition and transfer operators for neural networks with additive random perturbations, and prove the Markovian property of neural networks driven by discrete random processes characterized by statistically independent random vectors.

Modeling neural networks as discrete stochastic dynamical systems
We model a neural network with L layers as a discrete stochastic dynamical system of the form

X_{n+1} = H_n(X_n, w_n, ξ_n),   n = 0, 1, ..., L−1.   (1)

Here, the index n labels a specific layer in the network, H_n is the transition function of the (n+1)-th layer, X_0 ∈ R^d is the network input, X_n ∈ R^{d_n} is the output of the n-th layer, {ξ_0, ..., ξ_{L−1}} are random vectors, and w_n ∈ R^{q_n} are parameters characterizing the (n+1)-th layer. We allow the input X_0 to be random. Furthermore, we assume that the random vectors {ξ_0, ..., ξ_{L−1}} are statistically independent, and that ξ_n is independent of past and current states, i.e., {X_0, ..., X_n}. Under these assumptions, the neural network model (1) defines a Markov process {X_n} (see Appendix B). Further assumptions about the mapping H_n and its relation to the noise process will be stated in subsequent sections. The general formulation (1) includes the following important classes of neural networks:

1. Neural networks perturbed by additive random noise (Figure 1). These models are of the form

X_{n+1} = F_n(X_n, w_n) + ξ_n.   (2)

Figure 1: Sketch of a stochastic neural network model of the form (2), with L layers and N neurons per layer. We assume that the random vectors {ξ_0, ..., ξ_{L−1}} are statistically independent, and that ξ_n is independent of past and current states, i.e., {X_0, ..., X_n}. With these assumptions, {X_0, ..., X_L} is a Markov process (see Appendix B).
The mapping F_n is often defined as the composition of a layer-dependent affine transformation with an activation function ϕ, i.e.,

F_n(X_n, w_n) = ϕ(W_n X_n + b_n),   w_n = {W_n, b_n},   (3)

where W_n is a d_{n+1} × d_n weight matrix, and b_n ∈ R^{d_{n+1}} is a bias vector.
2. Neural networks perturbed by multiplicative random noise (Figure 2). These models are of the form

X_{n+1} = F_n(X_n, w_n) + M_n(X_n) ξ_n,   (4)

where M_n(X_n) is a matrix depending on X_n.

3. Neural networks with random weights and biases [18,58]. These models are of the form

X_{n+1} = ϕ(Z_n X_n + z_n),   (5)

where Z_n are random weight matrices, and z_n are random bias vectors. The pairs {Z_n, z_n} and {Z_j, z_j} are assumed to be statistically independent for n ≠ j. Moreover, {Z_j, z_j} are independent of the neural network states {X_0, ..., X_j} for all j = 0, ..., L−1.
In this article, we will focus our attention primarily on neural network models with additive random noise, i.e., models of the form (2). The functional setting for these models is extensively discussed in Appendix A. The neural network output is usually written as

q_L(x) = α · E[X_L | X_0 = x],   (6)

where α is a vector of output weights, and E[X_L | X_0 = x] is the expectation of the random vector X_L conditional on X_0 = x. In the absence of noise, (6) reduces to the well-known function composition rule

q_L(x) = α · F_{L−1}(F_{L−2}(⋯ F_0(x, w_0) ⋯, w_{L−2}), w_{L−1}).   (7)

Figure 2: Sketch of the stochastic neural network model (4). We assume that the random vectors {ξ_0, ..., ξ_{L−1}} are statistically independent, and that ξ_n is independent of past and current states, i.e., {X_0, ..., X_n}. Under these assumptions, the neural network model (4) defines a Markov process {X_n}. Note that the dimension of the vectors X_n can vary from layer to layer, e.g., in encoding or decoding neural networks.
The neural network parameters {α, w_0, ..., w_{L−1}} appearing in (6) or (7) are usually determined by minimizing a dissimilarity measure between q_L(x) and a given target function f(x) (supervised learning). By adding random noise to the neural network, e.g., in the form of additive noise or by randomizing weights and biases, we are essentially adding an infinite number of degrees of freedom to the system, which can be leveraged for training and transfer learning (see section 4).
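As a concrete illustration of the input-output map (6), the following minimal sketch estimates the conditional expectation by Monte Carlo sampling for a two-layer instance of model (2). The weights, biases, and noise amplitude below are arbitrary stand-ins, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network of the form (2): X_{n+1} = tanh(W_n X_n + b_n) + xi_n,
# with additive uniform noise after the first layer only (xi_1 = 0 for simplicity).
d, N = 2, 5                         # input dimension, neurons per layer
W0, b0 = rng.normal(size=(N, d)), rng.normal(size=N)
W1, b1 = rng.normal(size=(N, N)), rng.normal(size=N)
alpha = rng.normal(size=N)          # output weights
noise_amp = 0.1                     # range of the uniform perturbation xi_0

def network_output(x, n_samples=10_000):
    """Monte Carlo estimate of q_L(x) = alpha . E[X_L | X_0 = x], cf. Eq. (6)."""
    x1 = np.tanh(W0 @ x + b0)                                # deterministic first layer
    xi = rng.uniform(-noise_amp, noise_amp, (n_samples, N))  # samples of xi_0
    x2 = np.tanh((x1 + xi) @ W1.T + b1)                      # second layer, one row per sample
    return np.mean(x2 @ alpha)                               # ensemble average of alpha . X_2

x = np.array([0.3, -0.7])
print(network_output(x))   # approaches the deterministic rule (7) as noise_amp -> 0
```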

Composition and transfer operators for neural networks
In this section we derive the composition and transfer operators associated with the neural network model (1), which map, respectively, the conditional expectation E{u(X_L)|X_n = x} (where u(·) is a user-defined measurable function) and p_n(x) (the probability density of X_n) forward and backward across the network. To this end, we assume that the random vectors {ξ_0, ..., ξ_{L−1}} in (1) are statistically independent, and that ξ_n is independent of past and current states, i.e., {X_0, ..., X_n}. With these assumptions, {X_n} in (1) is a discrete Markov process (see Appendix B). Hence, the joint probability density function (PDF) of the random vectors {X_0, ..., X_L}, i.e., the joint PDF of the state of the entire neural network, can be factored as

p(x_0, ..., x_L) = p_{L|L−1}(x_L|x_{L−1}) p_{L−1|L−2}(x_{L−1}|x_{L−2}) ⋯ p_{1|0}(x_1|x_0) p_0(x_0).   (8)

By using the identity (Bayes' theorem)

p_{n|j}(x_n|x_j) p_j(x_j) = p_{j|n}(x_j|x_n) p_n(x_n),   (9)

we see that the chain of transition probabilities (8) can be reverted, yielding

p(x_0, ..., x_L) = p_{0|1}(x_0|x_1) p_{1|2}(x_1|x_2) ⋯ p_{L−1|L}(x_{L−1}|x_L) p_L(x_L).   (10)

From these expressions, it follows that

p_{n|q}(x_n|x_q) = ∫ p_{n|j}(x_n|x_j) p_{j|q}(x_j|x_q) dx_j   (11)

for all indices n, j and q in {0, ..., L}, excluding n = j = q. The transition probability equation (11) is known as the discrete Chapman-Kolmogorov equation, and it allows us to define the transfer operator mapping the PDF p_n(x_n) into p_{n+1}(x_{n+1}), together with the composition operator for the conditional expectation E{u(X_L)|X_n = x_n}. As we shall see hereafter, the discrete composition and transfer operators are adjoint to one another.

Transfer operator
Let us denote by p_q(x) the PDF of X_q, i.e., the output of the q-th neural network layer. We first define the operator that maps p_q(x) into p_n(x). By integrating the joint probability density of X_n and X_q, i.e., p_{n|q}(x|y) p_q(y), with respect to y we immediately obtain

p_n(x) = ∫ p_{n|q}(x|y) p_q(y) dy.   (12)

At this point, it is convenient to define the linear operator

N(n, q) p_q(x) = ∫ p_{n|q}(x|y) p_q(y) dy.   (13)

N(n, q) is known as the transfer (or Frobenius-Perron) operator [14]. From a mathematical viewpoint, N(n, q) is an integral operator with kernel p_{n|q}(x, y), i.e., the transition density integrated "from the right". It follows from the Chapman-Kolmogorov identity (11) that the set of integral operators {N(n, q)} satisfies

N(n, q) = N(n, j) N(j, q),   N(j, j) = I,   ∀ n, j, q ∈ {0, ..., L},   (14)

where I is the identity operator. The operator N allows us to map the one-layer PDF, e.g., the PDF of X_q, either forward or backward across the neural network (see Figure 3). As an example, consider a network with four layers and states X_0 (input), X_1, X_2, X_3, and X_4 (output). Then Eq. (13) implies that, e.g.,

p_2(x) = N(2, 0) p_0(x) = N(2, 4) p_4(x).   (15)

In summary, we have

p_n(x) = N(n, q) p_q(x)   ∀ n, q ∈ {0, ..., L},   where   N(n, q) p_q(x) = ∫ p_{n|q}(x|y) p_q(y) dy.   (16)
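To make the action of the transfer operator concrete, the following sketch propagates a PDF through a few layers of a scalar (one neuron per layer) instance of model (2) by discretizing the kernel (34) on a grid. The weight, bias, and noise level are arbitrary stand-ins.

```python
import numpy as np

# Scalar instance of model (2): X_{n+1} = tanh(w*X_n + b) + xi_n, xi_n ~ N(0, sigma^2),
# so the one-layer kernel is p_{n+1|n}(x|y) = rho(x - tanh(w*y + b)), cf. Eq. (34).
w, b, sigma = 1.5, 0.2, 0.3
rho = lambda z: np.exp(-z**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

x = np.linspace(-3, 3, 400)      # grid supporting the PDFs
dx = x[1] - x[0]

def transfer(p):
    """Apply N(n+1, n) by quadrature: p_{n+1}(x) = int rho(x - F(y)) p(y) dy, Eq. (13)."""
    K = rho(x[:, None] - np.tanh(w * x[None, :] + b))   # kernel matrix p_{n+1|n}(x_i | y_j)
    return K @ p * dx

p = np.exp(-(x - 0.5)**2 / 0.02) / np.sqrt(0.02 * np.pi)   # Gaussian PDF of the input X_0
for n in range(3):
    p = transfer(p)
    print(f"layer {n+1}: total probability = {p.sum() * dx:.4f}")   # stays close to 1
```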
We emphasize that modeling the PDF dynamics via neural networks has been studied extensively in machine learning, e.g., in the theory of normalizing flows for density estimation or variational inference [49,30,52].

Composition operator
For any measurable deterministic function u(x), the expectation of u(X_j) conditional on X_n = x is defined as

E{u(X_j)|X_n = x} = ∫ u(y) p_{j|n}(y|x) dy.   (17)

A substitution of (11) into (17) yields

E{u(X_j)|X_n = x} = ∫ E{u(X_j)|X_q = y} p_{q|n}(y|x) dy,   (18)

which holds for all j, n, q ∈ {0, ..., L−1}. At this point, it is convenient to define the integral operator

M(n, q) v(x) = ∫ v(y) p_{q|n}(y|x) dy,   (19)

which is known as the composition [14] or "stochastic Koopman" [54,63] operator. The operator (19) is also related to the Kolmogorov backward equation [47]. Thanks to the Chapman-Kolmogorov identity (11), the operators M(q, j) satisfy

M(n, q) = M(n, j) M(j, q),   M(j, j) = I,   ∀ n, j, q ∈ {0, ..., L},   (20)

where I is the identity operator. Equation (20) allows us to map the conditional expectation (17) of any measurable phase space function u(X_j) forward or backward through the network. As an example, consider again a neural network with four layers and states {X_0, ..., X_4}. We have

E{u(X_j)|X_0 = x} = M(0, 1) M(1, 2) ⋯ M(j−1, j) u(x).   (21)

Equation (21) holds for every j ∈ {0, ..., 4}. Of particular interest in the machine-learning context is the conditional expectation of u(X_L) (network output) given X_0 = x (network input), which can be computed as

E{u(X_L)|X_0 = x} = M(0, 1) M(1, 2) ⋯ M(L−1, L) u(x),   (22)

i.e., by propagating u(x) = E{u(X_L)|X_L = x} backward through the neural network using the single-layer operators M(i−1, i). Similarly, we can compute, e.g., E{u(X_0)|X_L = x} as

E{u(X_0)|X_L = x} = M(L, L−1) M(L−1, L−2) ⋯ M(1, 0) u(x).   (23)

For subsequent analysis, it is convenient to define

q_n(x) = E{u(X_L)|X_{L−n} = x}.   (24)

In this way, if E{u(X_L)|X_n = x} is propagated backward through the network by M(n−1, n), then q_n(x) is propagated forward by the operator

G(n, q) = M(L−n, L−q).   (25)

In fact, equations (24)-(25) allow us to write (22) in the equivalent form

q_L(x) = G(L, L−1) G(L−1, L−2) ⋯ G(1, 0) q_0(x),   q_0(x) = u(x),   (26)

i.e., as a forward propagation problem (see Figure 3). Note that we can write (26) (or (22)) explicitly in terms of iterated integrals involving single-layer transition densities as

q_L(x) = ∫ ⋯ ∫ u(y_L) p_{L|L−1}(y_L|y_{L−1}) ⋯ p_{1|0}(y_1|x) dy_1 ⋯ dy_L.   (27)
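As a concrete illustration of the forward recursion (26)-(27), the sketch below tabulates q_n on a grid for the same hypothetical scalar model used above, and cross-checks the result against a direct Monte Carlo simulation of the network. All parameters are stand-ins.

```python
import numpy as np

# Scalar model (2): X_{n+1} = tanh(w*X_n + b) + xi_n, xi_n ~ N(0, sigma^2).
# Eq. (26): q_{n+1}(x) = G(n+1, n) q_n(x) = int q_n(y) rho(y - tanh(w*x + b)) dy,
# which propagates q_0(x) = u(x) forward into q_L(x) = E{u(X_L) | X_0 = x}.
w, b, sigma, L = 1.5, 0.2, 0.3, 3
rho = lambda z: np.exp(-z**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

y = np.linspace(-3, 3, 400)     # quadrature grid for the inner integral
dy = y[1] - y[0]
K = rho(y[None, :] - np.tanh(w * y[:, None] + b))   # K[i, j] = rho(y_j - F(y_i))

q = y.copy()                    # q_0(y) = u(y) = y on the grid
for n in range(L):
    q = K @ q * dy              # apply G(n+1, n); same kernel here for every layer

# q now tabulates E{X_L | X_0 = y}; Monte Carlo check at the input point y = 0.5
rng = np.random.default_rng(1)
xs = 0.5 * np.ones(100_000)
for n in range(L):
    xs = np.tanh(w * xs + b) + sigma * rng.normal(size=xs.size)
print(q[np.searchsorted(y, 0.5)], xs.mean())   # the two estimates should agree closely
```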

Relation between composition and transfer operators
The integral operators M and N defined in (19) and (13) involve the same kernel function, i.e., the multilayer transition density p_{q|n}(x, y). In particular, M(n, q) integrates p_{q|n} "from the left", while N(q, n) integrates it "from the right". It is easy to show that M(n, q) and N(q, n) are adjoint to each other relative to the standard inner product in L² (see [14] for the continuous-time case). In fact,

⟨M(q, j) u, v⟩ = ∫∫ u(y) p_{j|q}(y|x) v(x) dy dx = ⟨u, N(j, q) v⟩.   (28)

Therefore

M(q, j)* = N(j, q)   ∀ q, j ∈ {0, ..., L},   (29)

where M(q, j)* denotes the operator adjoint of M(q, j) with respect to the L² inner product. By invoking the definition (25), we can also write (29) as

G(n, q)* = N(L−q, L−n).   (30)

In Appendix A we show that if the cumulative distribution function of each random vector ξ_n in the noise process has partial derivatives that are Lipschitz continuous in R(ξ_n) (the range of ξ_n), then the composition and transfer operators defined in Eqs. (19) and (13) are bounded in L² (see Proposition 16 and Proposition 17). Moreover, it is possible to choose the probability density of ξ_n such that the single-layer composition and transfer operators become strict contractions. This property will be used in section 7 to prove that the memory of a stochastic neural network driven by particular types of noise decays with the number of layers.

Multi-layer conditional transition density
We have seen that the composition and the transfer operators M and N defined in Eqs. (19) and (13) allow us to push forward and backward conditional expectations and probability densities across the neural network. Moreover, such operators are adjoint to one another (section 3.3), and also have the same kernel, i.e., the transition density p_{n|q}(x_n|x_q). In this section, we derive analytical formulas for the one-layer transition density p_{n+1|n}(x_{n+1}|x_n) corresponding to the neural network models we discussed in section 2. The multi-layer transition density p_{n|q}(x_n|x_q) is then obtained by composing one-layer transition densities as follows

p_{n|q}(x_n|x_q) = ∫ ⋯ ∫ p_{n|n−1}(x_n|x_{n−1}) ⋯ p_{q+1|q}(x_{q+1}|x_q) dx_{q+1} ⋯ dx_{n−1}.   (31)

We first consider the general class of stochastic neural network models defined by equation (1). By the definition of conditional probability density, we have

p_{n+1|n}(x_{n+1}|x_n) = ∫ p_{n+1|n, ξ_n}(x_{n+1}|x_n, ξ_n) p_{ξ_n|X_n}(ξ_n|x_n) dξ_n.   (32)

By assumption, p_{ξ_n|X_n}(ξ_n|x_n) = ρ_n(ξ_n) (the random vector ξ_n is independent of X_n) and therefore

p_{n+1|n}(x_{n+1}|x_n) = ∫ δ(x_{n+1} − H_n(x_n, w_n, ξ_n)) ρ_n(ξ_n) dξ_n,   (33)

where we denoted by δ(·) the Dirac delta function, and set ρ_n(ξ_n) = p_{ξ_n}(ξ_n). The delta function arises because if x_n and ξ_n are known then x_{n+1} is obtained by a purely deterministic relationship, i.e., Eq. (1). The general expression (33) can be simplified for particular classes of stochastic neural network models. For example, if the neural network has purely additive noise as in equation (2), then by using elementary properties of the delta function we obtain

p_{n+1|n}(x_{n+1}|x_n) = ρ_n(x_{n+1} − F_n(x_n, w_n)).   (34)

Note that such transition density depends on the PDF of the random vector ξ_n (i.e., ρ_n), the one-layer transition function F_n, and the parameters w_n. Similarly, the one-layer transition density associated with the stochastic neural network model (4) can be computed by substituting H_n(x_n, w_n, ξ_n) = F_n(x_n, w_n) + M_n(x_n)ξ_n into (33). This yields

p_{n+1|n}(x_{n+1}|x_n) = ∫ δ(x_{n+1} − F_n(x_n, w_n) − M_n(x_n)ξ_n) ρ_n(ξ_n) dξ_n.   (35)

By using well-known properties of the multivariate delta function [28] it is possible to re-write the integrand in (35) in a more convenient way. For instance, if the matrix M_n(x_n) has full rank then

δ(x_{n+1} − F_n(x_n, w_n) − M_n(x_n)ξ_n) = |det M_n(x_n)|^{−1} δ(ξ_n − M_n(x_n)^{−1}(x_{n+1} − F_n(x_n, w_n))),   (36)

which yields

p_{n+1|n}(x_{n+1}|x_n) = |det M_n(x_n)|^{−1} ρ_n(M_n(x_n)^{−1}(x_{n+1} − F_n(x_n, w_n))).   (37)

Other cases where M_n(x_n) is not a square matrix can be handled similarly [45,28]. Finally, consider the neural network model with random weights and biases (5). The one-layer transition density in this case can be expressed as

p_{n+1|n}(x_{n+1}|x_n) = ∫ δ(x_{n+1} − ϕ(Z_n x_n + z_n)) p(Z_n, z_n) dZ_n dz_n,   (38)

where p(Z_n, z_n) is the joint PDF of the weight matrix and bias vector in the n-th layer.
Remark: The transition density (34) associated with the neural network model (2) can be computed explicitly once we choose a probability model for ξ_n ∈ R^N. For instance, if we assume that {ξ_0, ξ_1, ..., ξ_{L−1}} are i.i.d. Gaussian random vectors with PDF

ρ(ξ) = (2πσ²)^{−N/2} exp(−‖ξ‖²/(2σ²)),   (39)

then we can explicitly write the one-layer transition density (34) as

p_{n+1|n}(x_{n+1}|x_n) = (2πσ²)^{−N/2} exp(−‖x_{n+1} − F_n(x_n, w_n)‖²/(2σ²)).   (40)

In Appendix A we provide an analytical example of transition density for a neural network with two layers, one neuron per layer, tanh(·) activation function, and uniformly distributed random noise.

The zero noise limit
An important question is what happens to the neural network as we send the amplitude of the noise to zero. To answer this question, consider the neural network model (2) with N neurons per layer, and introduce the parameter ε ≥ 0, i.e.,

X_{n+1} = F_n(X_n, w_n) + ε ξ_n.   (41)

We are interested in studying the orbits of the discrete dynamical system (41) as ε → 0. To this end, we assume that {ξ_n} are independent random vectors with densities ρ_n(x). This implies that the PDF of εξ_n is ρ_n(x/ε)/ε^N. It is shown in [31, Proposition 10.6.1] that the transfer operator N_ε(n+1, n) associated with (41), i.e.,

N_ε(n+1, n) p(x) = (1/ε^N) ∫ ρ_n((x − F_n(y, w_n))/ε) p(y) dy,   (43)

converges in norm to the Frobenius-Perron operator corresponding to F_n(X_n, w_n) as ε → 0. Indeed, in the limit ε → 0 we have, formally,

(1/ε^N) ρ_n((x − F_n(y, w_n))/ε) → δ(x − F_n(y, w_n)).   (44)

Substituting this expression into (13), one gets

p_{n+1}(x) = ∫ δ(x − F_n(y, w_n)) p_n(y) dy.   (45)

Similarly, a substitution into equation (26) yields

q_{n+1}(x) = q_n(F_{L−n−1}(x, w_{L−n−1})).   (46)

Iterating this expression all the way back to n = 1 yields the familiar function composition rule for neural networks, i.e.,

q_L(x) = u(F_{L−1}(F_{L−2}(⋯ F_0(x, w_0) ⋯), w_{L−1})).   (47)

Recalling that q_0(x) = u(x) and assuming that u(x) = Ax (linear output layer), where A is a matrix of output weights and x is a column vector, we can write (47) as

q_L(x) = A F_{L−1}(F_{L−2}(⋯ F_0(x, w_0) ⋯), w_{L−1}).   (48)

If u(x) is a linear scalar function, i.e., u(x) = α · x, then (48) coincides with equation (7).
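The zero-noise limit can be checked numerically. The sketch below, using arbitrary stand-in parameters, shows the Monte Carlo estimate of the stochastic output converging to the deterministic composition rule (47) as ε → 0.

```python
import numpy as np

# Scalar model (41): X_{n+1} = tanh(w*X_n + b) + eps*xi_n, with xi_n uniform on [-1, 1].
# As eps -> 0 the stochastic output should approach the composition rule (47).
rng = np.random.default_rng(2)
w, b, L, x0 = 1.5, 0.2, 3, 0.5

def stochastic_output(eps, n_samples=200_000):
    xs = x0 * np.ones(n_samples)
    for n in range(L):
        xs = np.tanh(w * xs + b) + eps * rng.uniform(-1, 1, n_samples)
    return xs.mean()

det = x0
for n in range(L):
    det = np.tanh(w * det + b)       # deterministic composition (47)

for eps in [0.5, 0.1, 0.02]:
    print(eps, abs(stochastic_output(eps) - det))   # error shrinks as eps -> 0
```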

Training paradigms
By adding random noise to a neural network we are essentially adding an infinite number of degrees of freedom to our system. This allows us to rethink the process of training the neural network from a probabilistic perspective. In particular, instead of optimizing a performance metric relative to the neural network weights w = {w_0, w_1, ..., w_{L−1}} (classical "training over weights" paradigm), we can now optimize the transition density p_{n+1|n}(x_{n+1}|x_n). Clearly, such transition density depends on the neural network weights and on the functional form of the one-layer transition function, e.g., as in equation (34). Hence, if we prescribe the PDF of ξ_n (e.g., ρ_n in (34)), then the transition density p_{n+1|n} is uniquely determined by the functional form of the function F_n, and by the weights w_n. On the other hand, if we are allowed to choose the PDF of the random vector ξ_n, then we can optimize it during training. This can be done while keeping the neural network weights w_n fixed, or by including them in the optimization process.
The interaction between random noise and the nonlinear dynamics modeled by the network can yield surprising results. For example, in stochastic resonance [43,56] it is well known that random noise added to a properly tuned bi-stable system can induce a peak in the Fourier power spectrum of the output, hence effectively amplifying the signal. Similarly, the random noise added to a neural network can be leveraged to achieve specific goals. For example, noise allows us to re-purpose a previously trained network on a different task without changing the weights of the network. This can be seen as an instance of stochastic transfer learning. To describe the method, consider the two-layer neural network model X_1 = F_0(X_0, w_0) + ξ_0, X_2 = F_1(X_1, w_1), with N neurons per layer, input X_0 ∈ Ω_0 ⊆ R^d, linear output u(x) = α · x, hyperbolic tangent activation function, and intra-layer random perturbation ξ_0. We are interested in training the input-output map represented by the conditional expectation (see Eq. (6))

q_2(x) = E{u(X_2)|X_0 = x}.   (52)

Let us first re-write (52) in a more explicit form. To this end, we recall that

q_2(x) = ∫_{R(X_2)} u(y) p_{2|0}(y|x) dy,   (53)

where R(X_2) denotes the range of the mapping X_2 = F_1(F_0(X_0, w_0) + ξ_0, w_1) for X_0 ∈ Ω_0 and arbitrary weights w_0 and w_1. By using the definition of the operator G(i+1, i) in (25) and the composition rule q_{i+1} = G(i+1, i) q_i (i = 0, 1), we easily obtain

q_1(x) = u(F_1(x, w_1))   (54)

and

q_2(x) = ∫_{R(ξ_0)} q_1(F_0(x, w_0) + z) ρ_0(z) dz,   (55)

where R(ξ_0) is the range of the random variable ξ_0, i.e., the support of ρ_0. Hence, we can equivalently write the input-output map (52) as

q_2(x) = ∫_{R(ξ_0)} α · F_1(F_0(x, w_0) + z, w_1) ρ_0(z) dz.   (56)

Training over weights
In the absence of noise, the PDF of ξ_0 appearing in (56), i.e., ρ_0(z), reduces to the delta function δ(z). Hence, the output of the neural network (56) can be written as

q_2(x) = α · F_1(F_0(x, w_0), w_1).   (57)

This is consistent with the well-known composition rule for deterministic networks. The parameters {α, w_0, w_1} appearing in (57) can be optimized to minimize a dissimilarity measure between q_2(x) and a given target function f(x), e.g., relative to the L²(Ω_0) norm discretized over a set of training points {x[1], ..., x[S]} ⊂ Ω_0:

min_{α, w_0, w_1} Σ_{i=1}^{S} (q_2(x[i]) − f(x[i]))².   (58)

The brackets [·] here are used to label the data points.
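The following sketch implements the training-over-weights problem (57)-(58) for a small network of the form (66). Since the paper's target (65) is not reproduced here, a hypothetical one-dimensional function stands in for f(x); the optimizer choice (BFGS with finite-difference gradients) is likewise an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

# "Training over weights" for q2(x) = alpha . tanh(W1 tanh(W0 x + b0) + b1), cf. (66).
N = 5
f = lambda x: np.sin(2 * np.pi * x) * np.exp(-x)          # hypothetical stand-in target
xs = np.linspace(0, 1, 30)                                # training grid, as in (67)

def q2(theta, x):
    W0, b0 = theta[:N], theta[N:2*N]
    W1 = theta[2*N:2*N + N*N].reshape(N, N)
    b1, alpha = theta[2*N + N*N:3*N + N*N], theta[3*N + N*N:]
    h = np.tanh(np.outer(x, W0) + b0)                     # first layer, one row per x
    return np.tanh(h @ W1.T + b1) @ alpha                 # second layer + linear output

loss = lambda theta: np.sum((q2(theta, xs) - f(xs))**2)   # discrete 2-norm (67)

rng = np.random.default_rng(3)
theta0 = 0.5 * rng.normal(size=4*N + N*N)                 # 45 parameters for N = 5, d = 1
res = minimize(loss, theta0, method="BFGS")
print("training error:", res.fun)
```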

Training over noise
By adding noise ξ_0 ∈ R^N to the output of the first layer we obtain the input-output map (56), hereafter rewritten for convenience

q_2(x) = ∫_{R(ξ_0)} α · F_1(F_0(x, w_0) + z, w_1) ρ_0(z) dz,   (60)

where ρ_0 denotes the PDF of ξ_0. Equation (60) looks like a Fredholm integral equation of the first kind. In fact, it can be written as

q_2(x) = ∫_{R(ξ_0)} κ_2(x, z) ρ_0(z) dz,   (61)

where

κ_2(x, z) = α · F_1(F_0(x, w_0) + z, w_1).   (62)

However, differently from standard Fredholm equations of the first kind, in (61) we have that x ∈ Ω_0 ⊆ R^d while z ∈ R^N, i.e., the integral operator with kernel κ_2 maps functions of N variables into functions of d variables. We are interested in finding a PDF ρ_0(z) that solves (60) for a given function h(x), i.e., find ρ_0 such that

h(x) = ∫_{R(ξ_0)} κ_2(x, z) ρ_0(z) dz.   (63)

If such a PDF ρ_0 exists, then we can re-purpose the neural network (57) with output q_2(x) ≈ f(x) to approximate a different function h(x), without modifying the weights {w_0, w_1} but rather simply adding noise ξ_0 between the first and the second layer, and then averaging the output over the PDF ρ_0. Equation (63) is unfortunately ill-posed in the space of probability distributions. In other words, for a given kernel κ_2 and a given target function h(x), there is (in general) no PDF ρ_0 that satisfies (63) exactly. However, one can proceed by optimization. For instance, ρ_0 can be determined by solving the constrained least-squares problem

{ρ_0, α} = argmin ‖h − ∫_{R(ξ_0)} κ_2(·, z) ρ_0(z) dz‖²_{L²(Ω_0)}   subject to   ρ_0 ≥ 0,   ∫_{R(ξ_0)} ρ_0(z) dz = 1.   (64)

Note that the training-over-noise paradigm can be seen as an instance of transfer learning [44], in which we turn the knobs on the PDF of the noise ρ_0 (changing it from a Dirac delta function to a proper PDF), and eventually the coefficients α, to approximate a different function while keeping the neural network weights and biases fixed. Training over noise can also be performed in conjunction with training over weights, to improve the overall optimization process of the neural network.
An example: Let us demonstrate the "training over noise" and the "training over weights" paradigms with a simple numerical example. Consider the one-dimensional target function f(x) defined in (65). We are interested in approximating f(x) with the two-layer neural network depicted in Figure 4 (N = 5 neurons per layer). In the absence of noise, the output of the network is given by equation (57), hereafter rewritten in full form for tanh(·) activation functions [50]

q_2(x) = α · tanh(W_1 tanh(W_0 x + b_0) + b_1).   (66)

Here, W_0, b_0, b_1 and α are five-dimensional column vectors, while W_1 is a 5 × 5 matrix. Hence, the input-output map (66) has 45 free parameters {W_0, W_1, b_0, b_1, α}, which are determined by minimizing the discrete 2-norm

Σ_{i=1}^{30} (q_2(x[i]) − f(x[i]))²,   (67)

where {x[1], ..., x[30]} is an evenly-spaced set of points in [0, 1]. In Figure 5 we show the neural network output (66) we obtained by minimizing the cost (67) relative to the weights {W_0, W_1, b_0, b_1, α} (training-over-weights paradigm).
Next, we add noise to our fully trained deterministic neural network. Specifically, we perturb the output of the first layer by an additive random vector ξ_0 with independent components supported in [−0.4, 0.4]. Since the random vector ξ_0 is assumed to have independent components, we can write its PDF ρ_0 as

ρ_0(z) = ρ_0^1(z_1) ⋯ ρ_0^N(z_N),   (69)

where {ρ_0^1, ..., ρ_0^N} are one-dimensional PDFs, each one of which is supported in [−0.4, 0.4]. In the training-over-noise paradigm, we are interested in finding the PDF of the random vector ξ_0, i.e., the one-dimensional PDFs {ρ_0^1, ..., ρ_0^N} appearing in (69), and a new vector of coefficients α, such that the output of the neural network (with the same weights and biases), averaged over all realizations of the noise ξ_0, approximates a new one-dimensional map h(x), different from (65). For this example, we choose the function h(x) defined in (70). In the presence of noise, the neural network output takes the form (see Eq. (61))

q_2(x) = ∫_{R(ξ_0)} κ_2(x, z) ρ_0(z) dz,   (71)

where R(ξ_0) = [−0.4, 0.4]^5 is the range of ξ_0, and

κ_2(x, z) = α · tanh(W_1 (tanh(W_0 x + b_0) + z) + b_1).   (72)

We approximate the 5-dimensional integral in (71) with a Gauss-Legendre-Lobatto (GLL) quadrature formula [22] on a tensor-product grid with 6 quadrature points per dimension. To this end, let {z[1], ..., z[6]} be the GLL quadrature points in [−0.4, 0.4]. The tensor-product quadrature approximation of (71) takes the form

q_2(x) ≈ Σ_{k=1}^{H} θ_k κ_2(x, ζ_k),   ζ_k = (z[i_1(k)], ..., z[i_5(k)]),   (73)

where H = 6^5 = 7776 is the total number of quadrature points in the domain [−0.4, 0.4]^5, θ_k are tensor-product GLL quadrature weights, and ζ_k represents a grid point in [−0.4, 0.4]^5 indexed by {i_1(k), ..., i_5(k)}, with i_j(k) ∈ {1, ..., 6} for each k and each j. Such indices are obtained by an appropriate ordering of the nodes in the tensor-product grid. We represent each one-dimensional PDF ρ_0^k(z) using a polynomial interpolant through the GLL points, i.e.,

ρ_0^k(z) = Σ_{j=1}^{6} ρ_0^k(z[j]) l_j(z),   (75)

where l_j(z) are the Lagrange characteristic polynomials associated with the one-dimensional GLL grid. Thus, the degrees of freedom of each PDF are represented by the following vector of PDF values at the GLL nodes

ρ^k = [ρ_0^k(z[1]), ..., ρ_0^k(z[6])]^T.   (76)

Note that in this setting we are approximating the PDF of ξ_0 using a non-parametric method, i.e., a polynomial interpolant through a tensor-product GLL grid. For non-separable PDFs, or for PDFs in higher dimensions, it may be more practical to consider a tensor representation [12,11], or a parametric inference method, i.e., a method that leverages assumptions on the shape of the probability distribution of ξ_0.
At this point, we have all the elements to solve the minimization problem (64), or an equivalent problem defined by the discrete 2-norm

min_{ρ^1, ..., ρ^5, α} Σ_{i=1}^{30} (q_2(x[i]) − h(x[i]))²,   (77)

subject to the linear constraints

ρ^k ≥ 0,   Σ_{j=1}^{6} θ_j ρ_0^k(z[j]) = 1,   k = 1, ..., 5.   (78)

In Figure 5 we demonstrate the training-over-weights and the training-over-noise paradigms for the neural network depicted in Figure 4. In the classical training-over-weights paradigm we minimize the error between the neural network output (66) and the function (65) in the discrete 2-norm (67). The training data is shown with red circles. In the training-over-noise paradigm we add random noise ξ_0 to the output of the first layer. This yields the input-output map (71). By optimizing for the PDF of the noise ρ_0 and the coefficients α as in (64), we can re-purpose the network previously trained on f(x) to approximate a different function h(x) defined in (70), without changing the neural network weights and biases.
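A minimal sketch of the discretized problem (77)-(78) follows, with two simplifications relative to the paper: the joint density of ξ_0 is discretized directly on a small tensor grid with trapezoidal weights (instead of the separable GLL interpolant (75)), which turns (64) into a linear least-squares problem, and α is kept fixed. The network weights are random stand-ins for a pre-trained network, and h(x) is a hypothetical new target.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
N = 5
W0, b0 = rng.normal(size=N), rng.normal(size=N)    # stand-ins for pre-trained weights
W1, b1 = rng.normal(size=(N, N)), rng.normal(size=N)
alpha = rng.normal(size=N)
xs = np.linspace(0, 1, 30)                         # training points x[1], ..., x[30]
h = lambda x: np.cos(3 * np.pi * x)                # hypothetical new target h(x)

z1 = np.linspace(-0.4, 0.4, 4)                     # 1D grid per noise component
Z = np.stack(np.meshgrid(*[z1] * N, indexing="ij"), -1).reshape(-1, N)   # 4^5 nodes
wq = np.full(4, 0.8 / 3); wq[[0, -1]] *= 0.5       # 1D trapezoidal weights on [-0.4, 0.4]
theta = np.prod(np.stack(np.meshgrid(*[wq] * N, indexing="ij"), -1).reshape(-1, N), axis=1)

hid = np.tanh(np.outer(xs, W0) + b0)               # tanh(W0 x + b0) at all training points
# Column k of A is theta_k * kappa_2(x, zeta_k), cf. the quadrature formula (73)
A = np.stack([np.tanh((hid + z) @ W1.T + b1) @ alpha for z in Z], axis=1) * theta

# Nonnegativity via NNLS; the normalization sum_k theta_k rho_k = 1 enters as a penalty row
A_aug = np.vstack([A, 100.0 * theta])
b_aug = np.append(h(xs), 100.0)
rho, _ = nnls(A_aug, b_aug)
print("residual on the new target:", np.linalg.norm(A @ rho - h(xs)))
```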
In Figure 6 we plot the one-dimensional PDFs of each component of the random vector ξ_0 we obtained from optimization. Such PDFs depend on the neural network weights and biases, which in this example are kept fixed. The PDF of ξ_0 is (by hypothesis) a product of five one-dimensional PDFs. Therefore it is quite straightforward to sample ξ_0 by using, e.g., rejection sampling applied independently to each one-dimensional PDF shown in Figure 6. With the samples of ξ_0 available, we can easily compute samples of the neural network output as

α · tanh(W_1 (tanh(W_0 x + b_0) + ξ_0[k]) + b_1),   k = 1, 2, ...,   (79)

where ξ_0[k] denotes the k-th sample of ξ_0. Clearly, if we compute an ensemble average over a large number of output samples then we obtain an approximation of q_2(x). This is demonstrated in Figure 7.
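A short sketch of the rejection-sampling step follows; the parabolic density below is a hypothetical example of an optimized one-dimensional PDF on [−0.4, 0.4].

```python
import numpy as np

rng = np.random.default_rng(4)

def rejection_sample(rho, n, bound):
    """Draw n samples from a density rho supported on [-0.4, 0.4]; bound >= max(rho)."""
    out = []
    while len(out) < n:
        z = rng.uniform(-0.4, 0.4, n)               # proposals from the uniform envelope
        keep = rng.uniform(0, bound, n) < rho(z)    # accept with probability rho(z)/bound
        out.extend(z[keep])
    return np.array(out[:n])

rho_example = lambda z: 1.875 * (1 - (z / 0.4)**2)  # a normalized parabolic density
samples = rejection_sample(rho_example, 10_000, bound=2.0)
print(samples.mean(), samples.var())                # mean ~ 0, variance ~ 0.4^2/5 = 0.032
```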

Random shifts
A related but simpler setting for re-purposing a neural network is to introduce a random shift in the input variable rather than perturbing the network layers. In this setting, the output of the network can be written as

E[q_2(x − η)] = ∫ q_2(x − y) ρ(y) dy,   (80)

where q_2 is defined in (57), and ρ is the PDF of the vector η defining the random shift x → x − η. Clearly, equation (80) is the expectation of the noiseless neural network output q_2(x) under a random shift with PDF ρ(y). To re-purpose a deterministic neural net using random shifts in the input variable one can proceed by optimization, i.e., by solving an optimization problem similar to (64) for a target function h(x).
Remark: Given a target function h(x) we can, in principle, compute the analytical solution of the integral equation

∫ q_2(x − y) ρ(y) dy = h(x)   (81)

using Fourier transforms. This yields

ρ(y) = F^{−1}[F[h]/F[q_2]],   (82)

where F[·] denotes the multivariate Fourier transform operator

F[h](k) = ∫_{R^d} h(x) e^{−i k·x} dx.   (83)

However, the function ρ(y) defined in (82) is, in general, not a PDF.
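The formal deconvolution (82) can be sketched with a discrete Fourier transform on a periodic grid. The functions below are hypothetical stand-ins for q_2 and h, and the spectral division is truncated where F[q_2] is tiny (a crude regularization); the recovered ρ typically takes negative values, illustrating that it need not be a valid PDF.

```python
import numpy as np

n = 256
x = np.linspace(0, 1, n, endpoint=False)
q2 = np.exp(-(x - 0.5)**2 / 0.02)                    # stand-in trained-network output
h = np.maximum(0.0, 1 - 30 * np.abs(x - 0.6))        # new target with slowly decaying spectrum

Fq, Fh = np.fft.fft(q2), np.fft.fft(h)
mask = np.abs(Fq) > 1e-3 * np.abs(Fq).max()          # keep only well-conditioned modes
rho_hat = np.where(mask, Fh / np.where(mask, Fq, 1.0), 0.0)
rho = np.real(np.fft.ifft(rho_hat)) * n              # factor n: grid spacing dx = 1/n

print("min rho:", rho.min())                         # negative values: not a valid PDF
print("integral:", rho.sum() / n)                    # equals int h / int q2, not 1 in general
```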

The Mori-Zwanzig formulation of deep learning
In section 3 we defined two linear operators, i.e., N(n, q) and M(n, q) in equations (13) and (19), mapping the probability density of the state X_n and the conditional expectation of a phase space function u(X_n) forward or backward across different layers of the neural network. In particular, we have shown that

p_n(x) = N(n, j) p_j(x),   (84)

E{u(X_L)|X_n = x} = M(n, q) E{u(X_L)|X_q = x}.   (85)

Equation (84) maps the PDF of the state X_j forward through the neural network, i.e., from the input to the output as n increases, while (85) maps the conditional expectation backward. We have also shown in section 3.2 that upon definition of

q_n(x) = E{u(X_L)|X_{L−n} = x},   (86)

we can rewrite (85) as a forward propagation problem, i.e.,

q_n(x) = G(n, q) q_q(x),   (87)

where G(n, q) = M(L−n, L−q) and M is defined in (19). The function q_n(x) is defined on the domain R(X_{L−n}), i.e., on the range of the random variable X_{L−n}. Equations (84)-(85) and (87) constitute the basis for developing the Mori-Zwanzig (MZ) formulation of deep neural networks. The MZ formulation is a technique originally developed in statistical mechanics [41,64] to formally integrate out phase variables in nonlinear dynamical systems by means of a projection operator. One of the main features of such a formulation is that it allows us to systematically derive exact equations for quantities of interest, e.g., low-dimensional observables, based on the equations of motion of the full system. In the context of deep neural networks such equations of motion are Eqs. (84)-(85), and (87).
To develop the Mori-Zwanzig formulation of deep learning, we introduce a layer-dependent orthogonal projection operator P_n together with the complementary projection Q_n = I − P_n. The nature and properties of P_n will be discussed in detail in section 6. For now, it suffices to assume only that P_n is a self-adjoint bounded linear operator, and that P_n² = P_n, i.e., P_n is idempotent. To derive the MZ equation for neural networks, let us consider a general recursion

g_{n+1} = R(n+1, n) g_n,   (89)

where {g_n, R(n+1, n)} can be either {p_n, N(n+1, n)} or {q_n, G(n+1, n)}, depending on the context of the application.

The projection-first and propagation-first approaches
We apply the projection operators P_n and Q_n to (89) to obtain the following coupled system of equations

g_{n+1} = R(n+1, n) P_n g_n + R(n+1, n) Q_n g_n,   (90)

Q_n g_n = Q_n R(n, n−1) P_{n−1} g_{n−1} + Q_n R(n, n−1) Q_{n−1} g_{n−1}.   (91)

By iterating the difference equation (91), we obtain the following formula for Q_n g_n

Q_n g_n = Φ_R(n, 0) Q_0 g_0 + Σ_{m=1}^{n} Φ_R(n, m) Q_m R(m, m−1) P_{m−1} g_{m−1},   (94)

where Φ_R(n, m) is the (forward) propagator of the orthogonal dynamics, i.e.,

Φ_R(n, m) = [Q_n R(n, n−1)][Q_{n−1} R(n−1, n−2)] ⋯ [Q_{m+1} R(m+1, m)],   Φ_R(m, m) = I.   (95)

Since g_n = R(n, 0) g_0, and g_0 is arbitrary, we have that (94) implies the operator identity

Q_n R(n, 0) = Φ_R(n, 0) Q_0 + Σ_{m=1}^{n} Φ_R(n, m) Q_m R(m, m−1) P_{m−1} R(m−1, 0).   (96)

A substitution of (94) into (90) yields the Mori-Zwanzig equation

g_{n+1} = R(n+1, n) P_n g_n + R(n+1, n) Φ_R(n, 0) Q_0 g_0 + R(n+1, n) Σ_{m=1}^{n} Φ_R(n, m) Q_m R(m, m−1) P_{m−1} g_{m−1}.   (97)
We shall call the first term at the right-hand side of (97) the streaming (or Markovian) term, in agreement with the classical literature on MZ equations. The streaming term represents the change in P_n g_n as we go from one layer to the next. The second term is known as the "noise term" in classical statistical mechanics. The reason for this definition is that Φ_R(n, 0) Q_0 g_0 represents the effects of the dynamics generated by Q_m R(m, m−1), which is usually under-resolved in classical particle systems and therefore modeled as random noise. Such noise, however, is very different from the random noise {ξ_0, ..., ξ_{L−1}} we introduced into the neural network model (1). The third term represents the memory of the neural network, and it encodes the interaction between the projected dynamics and its entire history. Note that if g_0 is in the range of P_0, i.e., if P_0 g_0 = g_0, then the second term drops out, yielding the simplified MZ equation

g_{n+1} = R(n+1, n) P_n g_n + R(n+1, n) Σ_{m=1}^{n} Φ_R(n, m) Q_m R(m, m−1) P_{m−1} g_{m−1}.   (98)

To integrate (98) forward, i.e., from one layer to the next, we first project g_m using P_m (for m = 0, ..., n), then apply the evolution operator R(n+1, n) to P_n g_n, and the memory operator Φ_R to the entire history of g_m (memory of the network). It is also possible to construct an MZ equation based on the reversed mechanism, i.e., by projecting R(n+1, n) g_n rather than g_n. To this end, re-write (90) as

g_{n+1} = P_{n+1} R(n+1, n) g_n + Q_{n+1} R(n+1, n) g_n,   (99)

i.e., the propagation via R(n+1, n) precedes projection (propagation-first approach). By applying the variation-of-constants formula (96) to (99) we arrive at a slightly different (though completely equivalent) form of the MZ equation, namely

g_{n+1} = P_{n+1} R(n+1, n) g_n + Φ_R(n+1, 0) g_0 + Σ_{m=1}^{n} Φ_R(n+1, m) P_m R(m, m−1) g_{m−1}.   (100)
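Since the discrete MZ equation is an exact operator identity, it can be verified in finite dimensions with random data. The following sanity check, using random matrices as stand-ins for the one-layer propagators R(n+1, n) and random orthogonal projections P_n, confirms that (97) reproduces the exact dynamics (89) to machine precision.

```python
import numpy as np

rng = np.random.default_rng(5)
k, L = 8, 6

Rs = [rng.normal(size=(k, k)) for _ in range(L)]          # R(n+1, n), n = 0..L-1
Ps = []
for n in range(L + 1):
    U, _ = np.linalg.qr(rng.normal(size=(k, 3)))          # random 3-dim subspace
    Ps.append(U @ U.T)                                    # orthogonal projection P_n
Qs = [np.eye(k) - P for P in Ps]

def Phi(n, m):
    """Orthogonal-dynamics propagator (95): Q_n R(n,n-1) ... Q_{m+1} R(m+1,m)."""
    out = np.eye(k)
    for j in range(m + 1, n + 1):
        out = Qs[j] @ Rs[j - 1] @ out
    return out

g = [rng.normal(size=k)]                                  # arbitrary g_0
for n in range(L):
    g.append(Rs[n] @ g[n])                                # exact dynamics (89)

n = L - 1                                                 # check (97) for g_{n+1}
mz = Rs[n] @ (Ps[n] @ g[n] + Phi(n, 0) @ Qs[0] @ g[0]
      + sum(Phi(n, m) @ Qs[m] @ Rs[m - 1] @ Ps[m - 1] @ g[m - 1] for m in range(1, n + 1)))
print(np.linalg.norm(mz - g[n + 1]))                      # ~1e-15: the identity is exact
```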

Discrete Dyson's identity
Another form of the MZ equation (97) can be derived based on a discrete version of the Dyson identity. To derive this identity, consider the orthogonal dynamics

z_{n+1} = Q_{n+1} R(n+1, n) z_n,   (102)

together with the full dynamics

y_{n+1} = R(n+1, n) y_n = Q_{n+1} R(n+1, n) y_n + P_{n+1} R(n+1, n) y_n.   (103)

By using the discrete variation-of-constants formula, we can rewrite (103) as

y_n = Φ_R(n, 0) y_0 + Σ_{m=1}^{n} Φ_R(n, m) P_m R(m, m−1) y_{m−1}.   (104)

For continuous-time autonomous dynamical systems the Dyson identity can be written as [60,61,63,55,6]

e^{tL} = e^{tQL} + ∫_0^t e^{(t−s)QL} P L e^{sL} ds,

where L is the (time-independent) Liouvillian of the system. The discrete Dyson identity and the corresponding discrete MZ formulation were first derived by Dave et al. in [10], and later revisited by Lin and Lu in [35]. Both these derivations are for autonomous (time-invariant) discrete dynamical systems, while our derivations also apply to non-autonomous systems, such as those generated by neural networks.
Similarly, solving (102) yields

z_n = Φ_R(n, 0) z_0,   (105)

where Φ_R is defined in (95). By substituting y_{m−1} = R(m−1, 0) y_0 into (104), and observing that y_0 is arbitrary, we obtain

R(n, 0) = Φ_R(n, 0) + Σ_{m=1}^{n} Φ_R(n, m) P_m R(m, m−1) R(m−1, 0).   (106)

The operator identity (106) is the discrete version of the well-known continuous-time Dyson identity. A substitution of (106) into g_n = R(n, 0) g_0 yields the following form of the MZ equation (97):

g_n = P_n R(n, n−1) g_{n−1} + Φ_R(n, 0) g_0 + Σ_{m=1}^{n−1} Φ_R(n, m) P_m R(m, m−1) g_{m−1}.   (107)

Here we have arranged the terms in the same way as in (97), i.e., streaming term, noise term, and memory term.

Mori-Zwanzig equations for probability density functions
We have seen that the PDF of the random vector X_n can be mapped forward and backward through the neural network via the transfer operator N(q, n) in (13). Replacing R with N in (97) yields the following Mori-Zwanzig equation for the PDF of X_n:

p_{n+1} = N(n+1, n) P_n p_n + N(n+1, n) Φ_N(n, 0) Q_0 p_0 + N(n+1, n) Σ_{m=1}^{n} Φ_N(n, m) Q_m N(m, m−1) P_{m−1} p_{m−1}.   (108)

Alternatively, by using the MZ equation (107), we can write

p_n = P_n N(n, n−1) p_{n−1} + Φ_N(n, 0) p_0 + Σ_{m=1}^{n−1} Φ_N(n, m) P_m N(m, m−1) p_{m−1},   (109)

where

Φ_N(n, m) = [Q_n N(n, n−1)][Q_{n−1} N(n−1, n−2)] ⋯ [Q_{m+1} N(m+1, m)].   (110)

Mori-Zwanzig equation for conditional expectations
Next, we discuss MZ equations propagating conditional expectations backward across the network, i.e., from q_0(x) = u(x) into q_L(x) = E{u(X_L)|X_0 = x}. To simplify the notation, we denote the projection operators in the space of conditional expectations with the same letters as in the space of PDFs, i.e., P_n and Q_n. Replacing R with G in (97) yields the following MZ equation for the conditional expectations:

q_{n+1} = G(n+1, n) P_n q_n + G(n+1, n) Φ_G(n, 0) Q_0 q_0 + G(n+1, n) Σ_{m=1}^{n} Φ_G(n, m) Q_m G(m, m−1) P_{m−1} q_{m−1},   (113)

where

Φ_G(n, m) = [Q_n G(n, n−1)][Q_{n−1} G(n−1, n−2)] ⋯ [Q_{m+1} G(m+1, m)],   Φ_G(m, m) = I.   (114)

Equation (113) can be equivalently written by incorporating the streaming term into the summation of the memory term, i.e.,

q_n = Φ_G(n, 0) q_0 + Σ_{m=1}^{n} Φ_G(n, m) P_m G(m, m−1) q_{m−1}.   (115)

Alternatively, by using Eq. (107) we obtain

q_n = P_n G(n, n−1) q_{n−1} + Φ_G(n, 0) q_0 + Σ_{m=1}^{n−1} Φ_G(n, m) P_m G(m, m−1) q_{m−1}.   (116)

Remark: The Mori-Zwanzig equations (108)-(109) and (113)-(116) allow us to perform dimensional reduction within each layer of the network (number of neurons per layer, via projection), or across different layers (total number of layers, via memory approximation). The MZ formulation is also useful to perform theoretical analysis of deep learning by using tools from operator theory. As we shall see in section 7, the memory of the neural network can be controlled by controlling the noise process {ξ_0, ξ_1, ..., ξ_{L−1}}.

Mori-Zwanzig projection operator
Suppose that the neural network model (2) is perturbed by independent random vectors {ξ_n} with bounded ranges R(ξ_n). Under this hypothesis, the range of each random vector X_m, i.e., R(X_m), is bounded. In fact, R(X_m) is contained in the set

Ω_m = {y + z : y ∈ R(F_{m−1}), z ∈ R(ξ_{m−1})},   (117)

and Ω_m is clearly a bounded set if R(ξ_{m−1}) is bounded. Note that for sigmoidal activation functions with range [−1, 1]^N the set Ω_m, and therefore its Lebesgue measure λ(Ω_m), does not depend on the neural network weights. With specific reference to MZ equations for scalar conditional expectations (i.e., conditional averages of scalar quantities of interest), and recalling that q_m(x) = E{u(X_L)|X_{L−m} = x} is defined on R(X_{L−m}), we define the following orthogonal projection operator on L²(R(X_{L−m})):

P_m v(x) = ∫_{R(X_{L−m})} K_{L−m}(x, y) v(y) dy.   (120)

Since P_m is, by definition, an orthogonal projection, we have that P_m is idempotent (P_m² = P_m), bounded, and self-adjoint relative to the inner product in L²(R(X_{L−m})). These conditions imply that the kernel K_{L−m}(x, y) is a symmetric Hilbert-Schmidt kernel that satisfies the reproducing kernel condition

∫_{R(X_{L−m})} K_{L−m}(x, z) K_{L−m}(z, y) dz = K_{L−m}(x, y).   (121)

Note that the classical Mori projection [61,60] can be written in the form (120) if we set

K_{L−m}(x, y) = Σ_{k=0}^{M} η_k^m(x) η_k^m(y),   (122)

where {η_0^m, ..., η_M^m} are orthonormal functions in L²(R(X_{L−m})). Since the range of X_{L−m} can vary from layer to layer, the set of orthonormal functions {η_j^m(x)} also depends on the layer (hence the label "m"). The projection operator P_m is said to be non-negative if P_m v ≥ 0 for all non-negative functions v [17]. An example of a kernel defining a non-negative orthogonal projection is

K_{L−m}(x, y) = 1/λ(R(X_{L−m})),   (123)

where λ(R(X_{L−m})) denotes the Lebesgue measure of R(X_{L−m}). More generally, if K_{L−m}(x, y) is any square-integrable symmetric conditional probability density function on R(X_{L−m}) × R(X_{L−m}), then P_m is a non-negative projection.

Analysis of the MZ equation
We now turn to the theoretical analysis of the MZ equation. In particular, we study the MZ equation for conditional expectations discussed in section 5.4, i.e., equation (113). Clearly, the operator Q_m G(m, m−1) plays a very important role in such an equation via the memory operator Φ_G defined in (114). Indeed, Φ_G appears in both the memory term and the noise term, and is defined by operator products involving Q_m G(m, m−1).
In this section, we aim at determining conditions on Q_m G(m, m−1) = (I − P_m) G(m, m−1), e.g., on the noise level and distribution, such that

‖Q_m G(m, m−1)‖_2 < 1.   (124)

In this way, the operator Q_m G(m, m−1) becomes a contraction, and therefore the MZ memory term in (113) decays with the number of layers, while the noise term decays to zero. Indeed, if (124) holds true, then the norm of the memory operator Φ_G(n, m) defined in (114) (and similarly in (115) and (116)) decays with the number of "QG" operator products taken, i.e., with the number of layers.

Deterministic neural networks
Before turning to the theoretical analysis of the operator Q_m G(m, m−1), it is convenient to dwell on the case where the neural network is deterministic (no random perturbations) and has tanh(·) activation functions. This case is quite common in practical applications, and also allows for significant simplifications of the MZ framework. First of all, in the absence of noise the output of each neural network layer has the same range, i.e., R(X_n) = [−1, 1]^N, where N is the number of neurons, assumed to be constant for each layer. Hence, we can choose a projection operator (120) that does not depend on the particular layer. For simplicity, we consider

P v(x) = ∫_{[−1,1]^N} K(x, y) v(y) dy,   (126)

where

K(x, y) = Σ_{k=0}^{M} η_k(x) η_k(y).   (127)

Here {η_0, ..., η_M} are orthonormal functions in L²([−1, 1]^N), e.g., normalized multivariate Legendre polynomials [57]. We sort {η_k} based on degree-lexicographic order. In this way, the first N+1 orthonormal functions in (127) are explicitly written as

η_0(x) = 2^{−N/2},   η_k(x) = √3 · 2^{−N/2} x_k,   k = 1, ..., N.   (128)

Moreover, if the neural network has linear output we have q_0(x) = α · x and therefore

P q_0 = q_0,   i.e.,   Q_0 q_0 = 0.   (129)

This implies that the noise term in the MZ equation (113) is zero for the projection kernel (127)-(128) and networks with linear output.
To study the MZ memory term we consider a simple example involving a two-layer deterministic neural net with d-dimensional input x ∈ Ω_0 ⊆ R^d and scalar output q_2(x). The MZ equation (113) with projection operator (126)-(128) can be written as

q_2(x) = G(2, 1) P q_1(x) + G(2, 1)[q_1(x) − P q_1(x)].   (130)

Clearly, if q_1 is approximately in the range of P (i.e., if q_1 ≈ P q_1) then the neural network is essentially memoryless (the memory term in (130) drops out). The next question is whether the nonlinear function

q_1(x) = α · tanh(W_1 x + b_1)   (131)

can indeed be approximated accurately by P q_1. This is a well-established result in multivariate polynomial approximation theory. In particular, it can be shown that P q_1 converges exponentially fast to q_1 as we increase the polynomial degree in the multivariate Legendre expansion (i.e., as we increase M in (127)). Exponential convergence follows immediately from the fact that the function (131) admits an analytic extension on a Bernstein poly-ellipse enclosing [−1, 1]^N (see [57] for details). The projection of the nonlinear function q_1(x) onto the linear space spanned by the N+1 orthonormal basis functions (128) (i.e., the space of affine functions defined on [−1, 1]^N) can be written as

P q_1(x) = Σ_{k=0}^{N} β_k η_k(x),   (132)

where the coefficients {β_0, ..., β_N} are given by

β_k = ∫_{[−1,1]^N} q_1(y) η_k(y) dy.   (133)

Hence, if q_1 is approximately in the range of P (i.e., P q_1 ≈ q_1), then we can explicitly write the MZ equation (130) as

q_2(x) ≈ β_0 2^{−N/2} + √3 · 2^{−N/2} Σ_{k=1}^{N} β_k tanh(w_0^{(k)} · x + b_0^{(k)}),   (134)

where w_0^{(k)} denotes the k-th row of W_0 and b_0^{(k)} the k-th entry of b_0. Note that this reduces the total number of degrees of freedom of the two-layer neural network from N(N + d + 3) to N(d + 2) + 1, under the condition that q_1 in equation (131) can be accurately approximated by the hyperplane P q_1 in equation (132). This depends, of course, on the weights W_1 and biases b_1 in (131).
In particular, if the entries of the weight matrix W_1 are sufficiently small, then by using Taylor series it is immediate to prove that P q_1 ≈ q_1.
An example: In Figure 8 we compare the MZ streaming and memory terms for the two-layer deterministic neural network we studied in section 4 and the target function (65). Here we consider N = 20 neurons, and approximate the integrals in (133) using Monte Carlo quadrature. Clearly, it is possible to constrain the norm of the weight matrix W_1 during training so that the nonlinear function q_1 in (131) is approximated well by the affine function P q_1 in (132). This essentially allows us to control the approximation error and therefore the amplitude of the MZ memory term in (130). For this particular example, we set ‖W_1‖_∞ ≤ 0.1, which yields the contraction factor reported in (135). Note that (135) is not the operator norm of QG(2, 1) we defined in (124). In fact, the operator norm requires computing the supremum of ‖QG(2, 1)v‖ over all functions v with unit norm, not just the linear function v = q_0. If training-over-weights of deterministic nets is done in a fully unconstrained optimization setting, then there is no guarantee that the MZ memory term is small.
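The sketch below reproduces the flavor of this computation: it estimates the projection coefficients (133) by Monte Carlo quadrature for a random weight matrix satisfying ‖W_1‖_∞ ≤ 0.1, and evaluates the size of the residual q_1 − P q_1 that drives the memory term. The weights are stand-ins, not the trained values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 20
W1 = 0.1 * (2 * rng.random((N, N)) - 1)        # small weights, consistent with ||W1||_inf <= 0.1
b1, alpha = 0.1 * rng.normal(size=N), rng.normal(size=N)

M = 200_000
y = rng.uniform(-1, 1, (M, N))                 # uniform samples on [-1, 1]^N
q1 = np.tanh(y @ W1.T + b1) @ alpha            # q1(y) = alpha . tanh(W1 y + b1), Eq. (131)

vol = 2.0**N                                   # Lebesgue measure of [-1, 1]^N
eta0 = 2.0**(-N / 2)                           # eta_0(x) = 2^{-N/2}, cf. (128)
beta = np.empty(N + 1)
beta[0] = vol * np.mean(q1 * eta0)             # beta_0 = int q1 eta_0 dy (MC estimate)
for k in range(N):
    etak = np.sqrt(3) * eta0 * y[:, k]         # eta_k(x) = sqrt(3) 2^{-N/2} x_k
    beta[k + 1] = vol * np.mean(q1 * etak)

Pq1 = beta[0] * eta0 + (np.sqrt(3) * eta0) * (y @ beta[1:])   # P q1 at the samples, Eq. (132)
resid = np.sqrt(vol * np.mean((q1 - Pq1)**2))                 # ~ L2 norm of the memory source
print("||q1 - P q1||_L2 estimate:", resid)
```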
The discussion about the approximation of the MZ memory term can be extended to deterministic neural networks with an increasing number of layers. For example, the output of a three-layer deterministic neural network can be written as

q_3 = G(3, 2) G(2, 1) q_1 = G(3, 2) P G(2, 1) P q_1 + G(3, 2)[I − P] G(2, 1) P q_1 + G(3, 2) G(2, 1)[q_1 − P q_1].   (136)

Note that if P q_1 is a linear function of the form (132), then the term G(3, 2)[I − P] G(2, 1) P q_1 has exactly the same functional form as the MZ memory term G(2, 1)[q_1 − P q_1] of the two-layer network. Hence, everything we said about the accuracy of a linear approximation of α · tanh(W_1 x + b_1) can be directly applied now to G(2, 1) P q_1. On the other hand, if q_1 can be approximated with accuracy by the linear function P q_1, then the term G(2, 1)[q_1 − P q_1] is likely to be small. This implies that the last term in (136) is likely to be small as well (a bounded operator G(3, 2) applied to the difference between two small functions). In other words, if the weights and biases of the network are such that q_1(x) = α · tanh(W_1 x + b_1) can be approximated with accuracy by the linear function (132), then the MZ memory term of the three-layer network is small. More generally, by using error estimates for multivariate polynomial approximation of analytic functions [57], it is possible to derive an upper bound for the operator norm of QG(m, m−1) in (124). Such a bound is rather involved, but in principle it allows us to determine conditions on the weights and biases of the neural network such that ‖QG(m, m−1)‖ ≤ κ, where κ is a given constant smaller than one. This allows us to simplify the memory term in (113) by neglecting terms involving a large number of "Q_m G(m, m−1)" operator products in (114). Hereafter, we determine general conditions for the operator Q_m G(m, m−1) to be a contraction in the presence of random perturbations.

Stochastic neural networks
Consider the stochastic neural network model (2) with L layers, N neurons per layer, and transfer functions F_n with range in [−1, 1]^N for all n. In this section we determine general conditions for the operator Q_m G(m, m−1) to be a contraction (i.e., to satisfy the inequality (124)) independently of the neural network weights. To this end, we first write the operator Q_m G(m, m−1) as

Q_m G(m, m−1) v(x) = ∫ γ_{L−m}(y, x) v(y) dy,   (137)

where

γ_{L−m}(y, x) = p_{L−m+1|L−m}(y|x) − ∫_{R(X_{L−m})} K_{L−m}(x, z) p_{L−m+1|L−m}(y|z) dz.   (138)

The conditional density p_{L−m+1|L−m}(y|x) = ρ_{L−m}(y − F_{L−m}(x, w_{L−m})) is defined on the set

R(X_{L−m+1}) × R(X_{L−m}).   (139)

As before, we assume that K_{L−m} is an element of L²(R(X_{L−m}) × R(X_{L−m})) and expand it as

K_{L−m}(x, y) = c_m + Σ_{i=1}^{M} η_i^m(x) η_i^m(y),   (140)

where c_m is a real number and the η_i^m are zero-mean orthonormal basis functions in L²(R(X_{L−m})), i.e.,

∫ η_i^m(x) dx = 0,   ∫ η_i^m(x) η_j^m(x) dx = δ_{ij}.   (141)

Lemma 1. The kernel (140) satisfies the idempotency requirement (121) if and only if

c_m = 1/λ(R(X_{L−m})),   (142)

where λ(R(X_{L−m})) is the Lebesgue measure of the set R(X_{L−m}).
Clearly, if G(m, m − 1) is itself a contraction and Q m is an orthogonal projection, then the operator product Q m G(m, m − 1) is a contraction.In the following Proposition, we compute a simple bound for the operator norm of Q m G(m, m − 1).
Proposition 2. Let Q_m be an orthogonal projection in L²(R(X_{L−m})). Suppose that the PDF ρ_{L−m} of ξ_{L−m} is in L²(R(ξ_{L−m})). Then

‖Q_m G(m, m−1)‖_2 ≤ λ(Ω_{L−m})^{1/2} ‖ρ_{L−m}‖_{L²(R(ξ_{L−m}))},   (144)

where λ(Ω_{L−m}) is the Lebesgue measure of the set Ω_{L−m} defined in (117). Moreover,

‖Q_m G(m, m−1)‖_2 ≤ ‖G(m, m−1)‖_2.   (145)

Proof. The last statement in the proposition is trivial. In fact, if Q_m is an orthogonal projection then its operator norm is less than or equal to one. Hence,

‖Q_m G(m, m−1)‖_2 ≤ ‖Q_m‖_2 ‖G(m, m−1)‖_2 ≤ ‖G(m, m−1)‖_2.

Moreover, the operator norm of G(m, m−1) can be bounded as (see Eq. (A.28))

‖G(m, m−1)‖_2 ≤ λ(Ω_{L−m})^{1/2} ‖ρ_{L−m}‖_{L²(R(ξ_{L−m}))}.

Combining the two inequalities completes the proof of (144).
The upper bound in (144) can be slightly improved using the definition of the projection kernel K_{L−m}. This is stated in the following theorem.
The upper bound in (149) is independent of the neural network weights.
Proof. The function γ_{L−m}(y, x) defined in (138) is a Hilbert-Schmidt kernel. Therefore,

‖Q_m G(m, m−1)‖_2 ≤ ‖γ_{L−m}‖_{L²(R(X_{L−m+1}) × R(X_{L−m}))}.   (150)

The L² norm of γ_{L−m} can be written as (see (138))

‖γ_{L−m}‖²_{L²} = ∫∫ p²_{L−m+1|L−m}(y|x) dy dx − 2 ∫∫∫ p_{L−m+1|L−m}(y|x) K_{L−m}(x, z) p_{L−m+1|L−m}(y|z) dz dy dx + ∫∫ (∫ K_{L−m}(x, z) p_{L−m+1|L−m}(y|z) dz)² dy dx.   (151)

By using (34), we can write the first term at the right-hand side of (151) as

∫∫ p²_{L−m+1|L−m}(y|x) dy dx = λ(R(X_{L−m})) ‖ρ_{L−m}‖²_{L²(R(ξ_{L−m}))}.   (152)

A substitution of the series expansion (140) into the second term at the right-hand side of (151) yields

−(2/λ(R(X_{L−m}))) ∫ (∫ p_{L−m+1|L−m}(y|x) dx)² dy − 2 Σ_{k=1}^{M} ∫ (∫ p_{L−m+1|L−m}(y|x) η_k^m(x) dx)² dy.   (153)

Here, we used the fact that the basis functions η_k^m(x) are zero-mean and orthonormal in R(X_{L−m}) (see Eq. (141)). Similarly, by substituting the expansion (140) into the third term at the right-hand side of (151), and using the reproducing property (121), we obtain

(1/λ(R(X_{L−m}))) ∫ (∫ p_{L−m+1|L−m}(y|x) dx)² dy + Σ_{k=1}^{M} ∫ (∫ p_{L−m+1|L−m}(y|x) η_k^m(x) dx)² dy.   (154)

Combining (151)-(154) yields

‖γ_{L−m}‖²_{L²} = λ(R(X_{L−m})) ‖ρ_{L−m}‖²_{L²} − (1/λ(R(X_{L−m}))) ∫ (∫ p_{L−m+1|L−m}(y|x) dx)² dy − Σ_{k=1}^{M} ∫ (∫ p_{L−m+1|L−m}(y|x) η_k^m(x) dx)² dy.   (155)

At this point we use the Cauchy-Schwarz inequality and well-known properties of conditional PDFs to bound the integral in the second term and the integrals in the last summation, respectively, as

∫ (∫ p_{L−m+1|L−m}(y|x) dx)² dy ≥ (∫∫ p_{L−m+1|L−m}(y|x) dx dy)² / λ(R(X_{L−m+1})) = λ(R(X_{L−m}))² / λ(R(X_{L−m+1})) ≥ λ(R(X_{L−m}))² / λ(Ω_{L−m+1})   (158)

and

∫ (∫ p_{L−m+1|L−m}(y|x) η_k^m(x) dx)² dy ≥ 0.   (159)

By combining (155)-(159) we finally obtain

‖γ_{L−m}‖²_{L²} ≤ λ(R(X_{L−m})) (‖ρ_{L−m}‖²_{L²(R(ξ_{L−m}))} − 1/λ(Ω_{L−m+1})) ≤ λ(Ω_{L−m}) (‖ρ_{L−m}‖²_{L²(R(ξ_{L−m}))} − 1/λ(Ω_{L−m+1})),   (160)

which proves the theorem.

Remark:
The last two terms in (155) represent the L² norm of the projection of the transition density p_{L−m+1|L−m}(y|·) onto the space spanned by {λ(R(X_{L−m}))^{−1/2}, η_1^m, ..., η_M^m}, integrated over y. If we complete this set to an orthonormal basis of L²(R(X_{L−m})), then by using Parseval's identity we can write (151) as

‖γ_{L−m}‖²_{L²} = Σ_{k=M+1}^{∞} ∫ (∫ p_{L−m+1|L−m}(y|x) η_k^m(x) dx)² dy,   (161)

where {η_{M+1}^m, η_{M+2}^m, ...} is an orthonormal basis for the orthogonal complement (in L²(R(X_{L−m}))) of the space spanned by the basis {λ(R(X_{L−m}))^{−1/2}, η_1^m, ..., η_M^m}. This allows us to bound (159) from below (with a nonzero bound). Such a lower bound depends on the basis functions η_k^m, on the weights w_{L−m}, as well as on the choice of the transfer function F_{L−m}. This implies that the bound (149) can be improved if we provide information on η_k^m and the activation function F_{L−m}. Note also that the bound (149) is formulated in terms of the Lebesgue measure of Ω_{L−m}, i.e., λ(Ω_{L−m}). The reason is that λ(Ω_{L−m}) depends only on the range of the noise (see definition (117)), while λ(R(X_{L−m})) depends on the range of the noise, on the weights of the layer L−m, and on the range of X_{L−m+1}.

Lemma 4. Consider the projection kernel (140) with c_m = 1/λ(R(X_{L−m})). Then

‖Q_m G(m, m−1)‖²_2 ≤ λ(Ω_{L−m}) (‖ρ_{L−m}‖²_{L²(R(ξ_{L−m}))} − 1/λ(Ω_{L−m+1})).   (162)

In particular, if

λ(Ω_{L−m}) (‖ρ_{L−m}‖²_{L²(R(ξ_{L−m}))} − 1/λ(Ω_{L−m+1})) ≤ κ < 1,   (163)

then Q_m G(m, m−1) is a contraction.

Proof. The proof follows immediately from equation (149).
The upper bound in (162) is a slight improvement of the bound we obtained in Appendix A, Lemma 19.

Contractions induced by uniform random noise
Consider the neural network model (2) and suppose that each ξ_n is a random vector with i.i.d. uniform components supported in [−b_n, b_n] (b_n > 0). Under this assumption, the L²(R(ξ_{L−m})) norm of ρ_{L−m} appearing in Theorem 3 and Lemma 4 can be computed analytically as

‖ρ_{L−m}‖²_{L²(R(ξ_{L−m}))} = (2 b_{L−m})^{−N},   (164)

where N is the number of neurons in each layer. For uniform random variables with independent components it is straightforward to show that the Lebesgue measure of the set Ω_{L−m} defined in (117) and appearing in Lemma 4 is the measure of the hypercube [−(1 + b_{L−m−1}), 1 + b_{L−m−1}]^N, i.e.,

λ(Ω_{L−m}) = 2^N (1 + b_{L−m−1})^N.   (165)

A substitution of (164) and (165) into the inequality (163) yields

λ(Ω_{L−m}) [(2 b_{L−m})^{−N} − (2(1 + b_{L−m}))^{−N}] ≤ κ.   (166)

Upon definition of n = L − m this can be written as

λ(Ω_n) [(2 b_n)^{−N} − (2(1 + b_n))^{−N}] ≤ κ,   λ(Ω_{n+1}) = 2^N (1 + b_n)^N.   (168)

A lower bound for the coefficient b_0 can be set using Proposition 20 in Appendix A. With a lower bound for b_0 available, we can compute a lower bound for each b_n (n = 1, 2, ...) by solving the recursion (168) with an equality sign. This is done in Figure 9 for different user-defined contraction factors κ. To compute the lower bounds on b_n we solved the recursion (168) numerically (with an equality sign) for b_n using the Newton method; to improve numerical accuracy we wrote the left-hand side of (168) in the equivalent form λ(Ω_n) 2^{−N} [(1 + b_n)^N − b_n^N]/[b_n(b_n + 1)]^N. It is seen that for a fixed number of neurons N, the noise level (i.e., a lower bound for b_n) that yields operator contractions in the sense of

‖Q_{L−n} G(L−n, L−n−1)‖²_2 ≤ κ   (173)

increases as we move from the input to the output, i.e., b_0 ≤ b_1 ≤ ⋯ ≤ b_{L−1}. For instance, for a two-layer neural network with N = 100 neurons per layer, the noise amplitude that induces a contraction factor κ = 10^{−4} independently of the neural network weights is b_0 ≈ 0.55. This means that if each component of the random vector ξ_0 is a uniform random variable with range [−0.55, 0.55], then ‖Q_2 G(2, 1)‖²_2 ≤ 10^{−4}. Moreover, we notice that as we increase the number of neurons N, the smallest noise amplitude that satisfies the operator contraction condition (173) converges to a constant value that depends on the layer n but not on the contraction factor κ. Such an asymptotic value can be computed analytically, as stated in Lemma 5.

Proof (of Lemma 5). The proof follows immediately by solving (168) with the equality sign for b_n and using the identity lim_{N→∞} c^{1/N} = 1, which holds for every constant c > 0.
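A short numerical sketch of this computation follows. It solves the recursion (168) (with the equality sign, in the reconstructed form stated above) for the noise amplitudes b_n, working in log space to avoid under/overflow for large N; a bracketing root finder stands in for the Newton iteration used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Lower bounds on b_n from lam(Omega_n) * ((2 b_n)^-N - (2(1+b_n))^-N) = kappa,
# with lam(Omega_0) = 1 and lam(Omega_{n+1}) = (2(1 + b_n))^N (tanh transfer functions).
N, kappa, L = 100, 1e-4, 5

def smallest_b(log_lam):
    def g(b):   # residual of (168) in log form
        return log_lam - N * np.log(2 * b) + np.log1p(-(b / (1 + b))**N) - np.log(kappa)
    return brentq(g, 1e-6, 1e3)   # g changes sign on this bracket

bs, log_lam = [], 0.0             # log of lam(Omega_0) = 1
for n in range(L):
    b = smallest_b(log_lam)
    bs.append(b)
    log_lam = N * np.log(2 * (1 + b))   # log of lam(Omega_{n+1})
print(np.round(bs, 3))            # b_0 ~ 0.55 for N = 100, kappa = 1e-4; b_n grows ~ n + 1/2
```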

Fading property of the neural network memory operator
We now discuss the implications of the contraction property of Q_m G(m, m−1) on the MZ equation. It is straightforward to show that if Proposition 2 or Lemma 4 holds true, then the MZ memory and noise terms in (113) decay with the number of layers. This property is summarized in the following theorem.

Theorem 6. If the conditions of Lemma 4 are satisfied, then the MZ memory operator in equation (114) decays with the number of layers in the neural network, i.e.,

‖Φ_G(n, m)‖_2 ≤ κ^{(n−m)/2}   for all n ≥ m,   (177)

i.e., the memory operator Φ_G(n, 0) decays exponentially fast with the number of layers.
Proof. The proof follows from ‖Q_{m+1} G(m+1, m)‖²_2 ≤ κ and equation (114). In fact, for all n ≥ m+1,

‖Φ_G(n, m)‖_2 ≤ ‖Q_n G(n, n−1)‖_2 ⋯ ‖Q_{m+1} G(m+1, m)‖_2 ≤ κ^{(n−m)/2}.

This result can be used to approximate the MZ equation of a neural network with a large number of layers by an equivalent one involving only a few layers. A simple numerical demonstration of the fading memory property (177) is provided in Figure 8 for a two-layer deterministic neural network. The fading memory property allows us to simplify terms in the MZ equation that are smaller than others. The most extreme case would be a memoryless neural network, i.e., a neural network in which the MZ memory term is zero. Such a network is essentially equivalent to a one-layer network. To show this, consider the MZ equation (115) in the case where the neural network is deterministic. Suppose that the L² projection operator P is the same for each layer and satisfies (I − P) q_0 = 0, i.e., q_0 is in the range of P. Then the output of the memoryless network, with input x ∈ Ω_0 ⊆ R^d, tanh(·) activation function, L layers, and N neurons per layer, can be written as

q_L(x) = β · η(tanh(W_0 x + b_0)),   (178)

where P q_{L−1} = β · η(x) and η(x) = [η_0(x), ..., η_M(x)]^T is a vector of orthonormal basis functions on [−1, 1]^N. Regarding what types of input-output maps can be represented by memoryless neural networks, the answer is provided by the universal approximation theorem for non-affine activation functions applied to maps of the form (178). We emphasize that there is no information loss associated with the fading MZ memory property, as the MZ equation is formally exact. However, if we approximate the MZ equation by neglecting small terms then we may lose some information.
We have seen that the memory operator Φ_G(m, 0) decays exponentially fast with the number of layers if the operator Q_m G(m, m−1) is a contraction (see Lemma 4). Specifically, we proved in Theorem 6 that ‖Φ_G(m, 0)‖_2 ≤ κ^{m/2}, where κ is a contraction factor of our choice. Hereafter we show that the magnitude of each term at the right-hand side of the expansion

q_n = Φ_G(n, 0) q_0 + Σ_{m=1}^{n} Φ_G(n, m) P_m G(m, m−1) q_{m−1}   (180)

can be controlled by κ independently of the neural network weights. In principle, this allows us to approximate a deep stochastic neural network using only a subset of the terms in (180).
where B is defined in (183).

Summary
We developed a new formulation of deep learning based on the Mori-Zwanzig (MZ) projection operator formalism of irreversible statistical mechanics. The new formulation provides new insights on how information propagates through neural networks in terms of formally exact linear operator equations, and it introduces a new important concept, i.e., the memory of the neural network, which plays a fundamental role in low-dimensional modeling and parameterization of the network (see, e.g., [32]). By using the theory of contraction mappings, we developed sufficient conditions for the memory of the neural network to decay with the number of layers. This allowed us to rigorously transform deep networks into shallow ones, e.g., by reducing the number of neurons per layer (using projections), or by reducing the total number of layers (using the decay property of the memory operator). We developed most of the analysis for MZ equations involving conditional expectations, i.e., Eqs. (113)-(116). However, by using the well-known duality between PDF dynamics and conditional expectation dynamics [14], it is straightforward to derive similar analytic results for MZ equations involving PDFs, i.e., Eqs. (108)-(109). Also, the mathematical techniques we developed in this paper can be generalized to other types of stochastic neural network models, e.g., neural networks with random weights and biases.

Figure 3: Sketch of the forward/backward integration process for probability density functions (PDFs) and conditional expectations. The transfer operator N(n+1, n) maps the PDF of X_n into the PDF of X_{n+1} forward through the neural network. On the other hand, the composition operator M maps the conditional expectation E[u(X_L)|X_{n+1} = x] backward to E[u(X_L)|X_n = x]. By defining the operator G(n, m) = M(L−n, L−m) we can transform the backward propagation problem for E[u(X_L)|X_n = x] into a forward propagation problem for q_n(x) = E[u(X_L)|X_{L−n} = x].

Figure 4: Sketch of the stochastic neural network model used to approximate the functions (65) (training-over-weights paradigm) and (70) (training-over-noise paradigm). The five-dimensional random vector ξ_0 is assumed to have statistically independent components. We proceed by first training the neural network with no noise on the target function f(x) defined in (65). Subsequently, we perturb the network with the random vector ξ_0, and optimize the PDF of ξ_0 so that the conditional expectation of the neural network output, i.e., (71), approximates a second target function h(x) for the same weights and biases.

Figure 5: Demonstration of the "training over weights" and "training over noise" paradigms. In the training-over-weights paradigm we minimize the dissimilarity measure (67) between the output (66) of the two-layer neural network depicted in Figure 4 (with ξ_0 = 0) and the target function (65). The training data is shown with red circles. In the training-over-noise paradigm we add random noise to the output of the first layer and optimize for α and the PDF of the noise as in (64). This can be seen as an instance of transfer learning, in which we keep the neural network weights and biases fixed but update the output weights α and the PDF of the random vector ξ_0 to approximate a different function h(x) defined in (70).

Figure 6: Training-over-noise paradigm. One-dimensional probability density functions of each component of the random vector ξ_0 obtained by solving the optimization problem (64) for the function h(x) defined in Eq. (70) (see Figure 5). The PDF of ξ_0 is the product of all five PDFs (see Eq. (69)). The degrees of freedom of each PDF, i.e., the vectors defined in equation (76), are visualized as red dots (PDF values at GLL points). Each PDF is a polynomial of degree at most 5.

Figure 7: Training-over-noise paradigm. Samples of the stochastic neural network output (79) corresponding to random samples of ξ_0. We also show that the ensemble average of the network output computed over 10^5 independent samples of (79) converges to q_2(x), as expected.

Figure 8: Comparison between the MZ streaming and memory terms for the two-layer deterministic neural network we studied in section 4 and the target function (65). Here we consider N = 20 neurons, and approximate the high-dimensional integrals in (133) by using Monte Carlo quadrature. The neural network is trained by constraining the entries of the weight matrix W_1 as ‖W_1‖_∞ ≤ 0.1. This allows us to control the approximation error ‖q_1 − P q_1‖_{L²([−1,1]^N)} when projecting the nonlinear function (131) onto the space of affine functions (132) which, in turn, controls the magnitude of the MZ memory term.

Theorem 3.
Let K_{L−m} be the projection kernel (140) with c_m = 1/λ(R(X_{L−m})). Then the operator norm of Q_m G(m, m−1) can be bounded as

‖Q_m G(m, m−1)‖²_2 ≤ λ(Ω_{L−m}) (‖ρ_{L−m}‖²_{L²(R(ξ_{L−m}))} − 1/λ(Ω_{L−m+1})).   (149)

Figure 9: Lower bound on the noise amplitude (168) versus the number of neurons (N) for λ(Ω_0) = 1 (Lebesgue measure of the domain Ω_0 defining the neural network input), and different user-defined contraction factors κ. With these values of b_n the operator Q_{L−n} G(L−n, L−n−1) is a contraction satisfying ‖Q_{L−n} G(L−n, L−n−1)‖²_2 ≤ κ regardless of the neural network weights and biases.

Lemma 5.
Consider the neural network model (2) and suppose that each perturbation vector ξ_n has i.i.d. components distributed uniformly in [−b_n, b_n]. The smallest noise amplitude b_n that satisfies the operator contraction condition (173) satisfies the asymptotic result

lim_{N→∞} b_n = n + 1/2,

independently of the contraction factor κ and Ω_0 (domain of the neural network input).