Random vibration analysis with radial basis function neural networks

Random vibrations occur in many engineering systems, including buildings subject to earthquake excitation, vehicles traveling on rough roads, and offshore platforms in random waves. The analysis of random vibrations of linear systems has been well studied. For nonlinear systems, particularly multi-degree-of-freedom systems, many challenges remain, including analyzing the probability distribution of transient responses of the system. Monte Carlo simulation was long considered the only viable method for this task. In this paper, we propose a method to construct semi-analytical transient solutions of the probability distribution of transient responses of nonlinear systems by using radial basis function neural networks. The activation functions consist of normalized Gaussian probability density functions. Two examples are presented to show the effectiveness of the proposed solution method. The transient probability distributions and response moments of these examples are presented, which have not been reported in the literature before.


Introduction
Random vibrations are common in engineering systems. When a system is exposed to random loadings such as airflow, ocean waves and earthquakes, random vibrations occur. Random vibration analysis is an important part of structural and mechanical system design and reliability assessment. Various statistics of the system response, such as moments, power spectrum, and probability distribution, are often needed for proper design and reliability assessment. Most of these statistics can be derived from the probability distribution of the response. However, it is a challenge to obtain analytical solutions of the time-varying probability distribution, particularly when the system is nonlinear. This paper presents a radial basis function neural networks (RBF-NN) solution method for computing the time-varying probability density function (PDF) for linear and nonlinear dynamic systems. We further demonstrate the utility of the time-varying PDF for obtaining various response statistics such as response moments.
Neural network methods for solving partial differential equations have become a popular topic [29,30]. The well-known classical Ritz method has also been implemented with the help of neural networks [31], and finite element solutions have been obtained with the help of neural networks [32]. In the present paper, a radial basis function neural networks solution of the FPK equation is proposed. The activation functions consist of Gaussian probability density functions with their means distributed in the state space. This neural network approximate solution provides a very efficient and accurate way of computing the time-varying probability density function of nonlinear dynamic systems subject to random excitations. There have been many studies of radial basis functions for regression applications and for solving partial differential equations [33]. The authors of [34] conclude that radial-basis-function networks having one hidden layer and the same smoothing factor in each kernel are broad enough for universal approximation. An excellent review of radial basis function networks for regression and classification can be found in [35]. This paper studies two challenging examples of nonlinear stochastic dynamic systems to validate the proposed solution method. In particular, we report highly accurate time-varying probability distributions and moments of the response of nonlinear stochastic dynamic systems for the first time in the literature.
The rest of the paper is organized as follows. Section 2 reviews the FPK equation. Section 3 presents the radial basis function neural networks (RBF-NN) solution of the FPK equation. Section 5 presents random vibration analyses of two challenging examples of nonlinear dynamic systems. The accuracy and computational efficiency of the RBF-NN solution method are investigated and compared with Monte Carlo simulations. The method of Gaussian closure is also considered in the comparison studies. Section 6 concludes the paper.

Problem statement
Consider random vibrations of an n-degree-of-freedom nonlinear system subject to Gaussian white noise excitations. By introducing the state variables, the equation of motion can be written in a 2n-dimensional state space. The resulting equations, known as the Itô stochastic differential equations, are given by
$$dX_j(t) = m_j(\mathbf{X}, t)\,dt + \sum_{k=1}^{m} \sigma_{jk}(\mathbf{X}, t)\,dB_k(t), \quad j = 1, 2, \ldots, 2n,$$
where $X_j(t)$ is the jth state variable, and $\{B_k(t)\}$ is an m-dimensional vector of independent unit Brownian motions.
The conditional probability density function $p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)$ of the system response satisfies the FPK equation
$$\frac{\partial p}{\partial t} = L_{FPK}[p] = -\sum_{j=1}^{2n} \frac{\partial}{\partial x_j}\left[m_j(\mathbf{x}, t)\,p\right] + \frac{1}{2}\sum_{j=1}^{2n}\sum_{k=1}^{2n} \frac{\partial^2}{\partial x_j \partial x_k}\left[b_{jk}(\mathbf{x}, t)\,p\right],$$
subject to the initial condition $p(\mathbf{x}, t_0)$, where $L_{FPK}[\cdot]$ stands for the differential operator of the FPK equation, $p(\mathbf{x}, t_0)$ is a given probability density function, and $b_{jk} = \sum_{l=1}^{m} \sigma_{kl}\sigma_{jl}$. $m_j(\mathbf{x}, t)$ and $b_{jk}(\mathbf{x}, t)$ are known as the drift and diffusion terms. Very often, the initial condition is given as a delta function $p(\mathbf{x}, t_0) = \delta(\mathbf{x} - \mathbf{x}_0)$, i.e. the system starts at a point $\mathbf{x}_0$ with probability one.
To fully determine the probability density function p(x, t|x 0 , t 0 ) from the FPK equation, we need to specify an initial condition, and proper boundary conditions in the state space. In this work, we shall not consider boundary conditions. Hence, x ∈ R 2n .

RBF neural networks solution
We first present a trial solution for the FPK equation. In particular, we propose a RBF-NN solution with Gaussian activation functions. For convenience, we denote the solution of the FPK equation p(x, t|x 0 , t 0 ) as p(x, t) subject to the initial condition p(x, t 0 ).

Trial solution
Let $\hat{p}(\mathbf{x}, \mathbf{w}(k))$ denote the solution $p(\mathbf{x}, t)$ at time step k, where $\mathbf{w}(k)$ is a set of time-varying undetermined parameters. The solution $\hat{p}(\mathbf{x}, \mathbf{w}(k))$ consists of the RBF-NN with Gaussian activation functions given by
$$\hat{p}(\mathbf{x}, \mathbf{w}(k)) = \sum_{j=1}^{N_G} w_j(k)\, g(\mathbf{x}, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j),$$
where $N_G$ is the number of Gaussian functions, and $\mathbf{x} \in R^{2n}$ is the position vector. The model parameters $w_j$ form a vector $\mathbf{w}(k) = [w_1(k), w_2(k), \ldots, w_{N_G}(k)]^T$. The time dependence of the assumed solution $\hat{p}$ is through that of the coefficients $\mathbf{w}(k)$. The index k indicates the time step such that $t = t_0 + k\tau$, $k = 0, 1, 2, \ldots$. $g(\mathbf{x}, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)$ is a neuron in the form of a Gaussian probability density function with mean $\boldsymbol{\mu}_j$ and covariance matrix $\boldsymbol{\Sigma}_j$.
We should point out that the radial basis Gaussian function in the literature refers to the special case when $\boldsymbol{\Sigma}_j = \sigma_j^2 \mathbf{I}$, where $\mathbf{I}$ is an identity matrix. In this case,
$$g(\mathbf{x}, \boldsymbol{\mu}_j, \sigma_j) = \frac{1}{(\sqrt{2\pi}\,\sigma_j)^{2n}} \exp\left(-\frac{\|\mathbf{x} - \boldsymbol{\mu}_j\|^2}{2\sigma_j^2}\right) = \prod_{i=1}^{2n} \frac{1}{\sqrt{2\pi}\,\sigma_j} \exp\left(-\frac{(x_i - \mu_{j,i})^2}{2\sigma_j^2}\right),$$
where $\|\cdot\|$ denotes the norm of a vector, and $\mu_{j,i}$ is the ith component of the mean vector $\boldsymbol{\mu}_j$. The radial form of the Gaussian functions makes computations much easier and is used in the numerical studies reported later.
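The radial Gaussian neurons and the mixture form of the RBF-NN trial solution can be sketched in a few lines of code. This is a minimal illustration with hypothetical names, not the authors' implementation.

```python
import numpy as np

def radial_gaussian(x, mu, sigma):
    """Radial Gaussian neuron with covariance sigma^2 * I.

    x : (..., d) array of evaluation points; mu : (d,) mean vector.
    """
    d = x.shape[-1]
    norm = (2.0 * np.pi * sigma**2) ** (-d / 2.0)
    sq_dist = np.sum((x - mu) ** 2, axis=-1)
    return norm * np.exp(-sq_dist / (2.0 * sigma**2))

def rbf_pdf(x, weights, mus, sigma):
    """RBF-NN trial solution: p-hat(x) = sum_j w_j g(x, mu_j, sigma)."""
    return sum(w * radial_gaussian(x, mu, sigma) for w, mu in zip(weights, mus))
```

Because each neuron is a normalized density, the mixture integrates to one whenever the weights sum to one, which is exactly the normalization condition used below.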
It is noted that the transient probability density function depends on the initial condition. When the initial probability distribution is highly concentrated such as a delta function δ(x − x 0 ), we can use the short-time analytical approximate solution to compute the probability density over one or a few time steps τ to avoid the need for extremely small meshes.
The RBF-NN PDF solution must satisfy the normalization condition
$$\int_{R^{2n}} \hat{p}(\mathbf{x}, \mathbf{w}(k))\, d\mathbf{x} = 1.$$
Because each Gaussian activation function is a normalized probability density, $\int_{R^{2n}} g(\mathbf{x}, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)\, d\mathbf{x} = 1$, the normalization condition for the RBF-NN PDF solution becomes
$$\sum_{j=1}^{N_G} w_j(k) = 1.$$
Recall the initial condition $p_0(\mathbf{x})$ of the FPK equation (3). We apply the solution given in Eq. (4) with a proper choice of the means $\boldsymbol{\mu}_j$ and covariance matrices $\boldsymbol{\Sigma}_j$ to fit the initial condition, and obtain the initial coefficients $\mathbf{w}(0)$. Let $\tau$ be a small time step for integrating the FPK equation forward in time, and $\mathbf{w}(k)$ be the coefficient vector at time $t = k\tau$, $k = 0, 1, 2, \ldots$.
We apply the finite difference to compute the time derivative
$$\frac{\partial p}{\partial t} \approx \frac{\hat{p}(\mathbf{x}, \mathbf{w}(k)) - \hat{p}(\mathbf{x}, \mathbf{w}(k-1))}{\tau} + \varepsilon_t,$$
where $\varepsilon_t$ is the truncation error of the first order finite difference approximation. Substituting the solution in Eq. (4) and the time derivative in Eq. (10) into Eq. (2), we define a local error of the equation as
$$e(\mathbf{x}, \mathbf{w}(k)) = \frac{\hat{p}(\mathbf{x}, \mathbf{w}(k)) - \hat{p}(\mathbf{x}, \mathbf{w}(k-1))}{\tau} - L_{FPK}[\hat{p}(\mathbf{x}, \mathbf{w}(k))] = \sum_{j=1}^{N_G} w_j(k)\, s_j(\mathbf{x}) - \frac{1}{\tau}\hat{p}(\mathbf{x}, \mathbf{w}(k-1)),$$
where $s_j(\mathbf{x}) = g(\mathbf{x}, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)/\tau - L_{FPK}[g(\mathbf{x}, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)]$. The explicit expression for $s_j(\mathbf{x})$ after all the differentiations are carried out can be obtained symbolically in order to speed up the computation. Note that at step k, $\hat{p}(\mathbf{x}, \mathbf{w}(k-1))$ is fully determined. The truncation error $\varepsilon_t$ can be thought of as being absorbed in the error $e(\mathbf{x}, \mathbf{w}(k))$.
We define an integrated squared error, together with a Lagrange multiplier $\lambda(k)$ for the normalization condition in Eq. (9), as
$$J(\mathbf{w}(k), \lambda(k)) = \int_{D} e^2(\mathbf{x}, \mathbf{w}(k))\, d\mathbf{x} + \lambda(k)\left[\sum_{j=1}^{N_G} w_j(k) - 1\right].$$
We should point out that this optimization step is important to correct the truncation error so that, at every integration time step, the solution is obtained in the sense of the least squared error.
We select $\mathbf{w}(k)$ and $\lambda(k)$ to minimize J. Define a parameter vector as
$$\mathbf{c}(k) = [\mathbf{w}(k)^T, \lambda(k)]^T.$$
For convenience, we shall denote $J(\mathbf{w}(k), \lambda(k))$ as $J(\mathbf{c}(k))$. We shall derive necessary conditions for minimization of $J(\mathbf{c}(k))$.

Motivation for choosing Gaussian functions
It is well known that the generalized cell mapping (GCM) with the short-time Gaussian approximation (STGA) is a highly effective method for computing the probabilistic response of nonlinear stochastic systems [1]. In the GCM-STGA method, the one-step transition probability matrix of nonlinear stochastic systems starting from each initial cell has a Gaussian distribution over a short time. The probabilistic response of the stochastic system is fully determined by the one-step transition probability matrix from all the initial cells in a given domain. The GCM-STGA method indicates that at every mapping step, the probabilistic distribution of the response is a linear combination of many Gaussian probability distributions. This is consistent with the RBF-NN solution given in Eq. (4).

Sampling method for integration
Note that the computation of $J(\mathbf{c}(k))$, involving integrations in a high dimensional space, can be intensive. Instead of integration, we can randomly sample a large number of points $\mathbf{x}_i \in D$, where $D \subset R^{2n}$ is a large enough domain, in order to calculate an approximate value of the integrated squared error as
$$J_s(\mathbf{c}(k)) = \frac{1}{N_s}\sum_{i=1}^{N_s} e^2(\mathbf{x}_i, \mathbf{w}(k)) + \lambda(k)\left[\sum_{j=1}^{N_G} w_j(k) - 1\right],$$
where $N_s$ is the number of sampled points. Since $e(\mathbf{x}_i, \mathbf{w}(k))$ is linear in the coefficients, the sampled performance index $J_s(\mathbf{c})$ can be written in the quadratic form
$$J_s(\mathbf{c}) = \mathbf{c}^T \mathbf{A}\, \mathbf{c} + \mathbf{b}^T \mathbf{c} + d,$$
where $\mathbf{A}$, $\mathbf{b}$ and $d$ are assembled from the data matrix $\mathbf{S}$ with entries $S_{ij} = s_j(\mathbf{x}_i)$. The necessary condition for minimizing $J_s$ with respect to the vector $\mathbf{c}$ reads
$$\frac{\partial J_s}{\partial \mathbf{c}} = (\mathbf{A} + \mathbf{A}^T)\mathbf{c} + \mathbf{b} = \mathbf{0}.$$
The optimal coefficient vector is given by
$$\mathbf{c} = -(\mathbf{A} + \mathbf{A}^T)^{-1}\mathbf{b},$$
when the inverse of the matrix $(\mathbf{A} + \mathbf{A}^T)$ exists. The existence of the inverse heavily depends on the so-called data matrix $\mathbf{S}$. As is well known in the regression research [36], the data matrix also determines the overall performance of the RBF-NN PDF solution in Eq. (4).
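One time step of this constrained least-squares update can be sketched for a scalar system with drift $m(x) = ax - bx^3$ and diffusion $2D_1$ (the setting of Example one below). This is an illustrative sketch with hypothetical names; the equivalent KKT linear system is solved instead of the $(\mathbf{A} + \mathbf{A}^T)$ form, and no symbolic preprocessing is used.

```python
import numpy as np

def g(x, mu, sig):
    """1-D Gaussian neuron."""
    return np.exp(-(x - mu)**2 / (2.0 * sig**2)) / (np.sqrt(2.0 * np.pi) * sig)

def s_col(x, mu, sig, a, b, D, tau):
    """s_j(x) = g_j/tau - L_FPK[g_j] for the drift m(x) = a x - b x^3."""
    gj  = g(x, mu, sig)
    gp  = -(x - mu) / sig**2 * gj                      # g_j'
    gpp = ((x - mu)**2 / sig**4 - 1.0 / sig**2) * gj   # g_j''
    m, mp = a*x - b*x**3, a - 3.0*b*x**2
    L_g = -(mp * gj + m * gp) + D * gpp                # FPK operator applied to g_j
    return gj / tau - L_g

def step_weights(w_prev, xs, mus, sig, a, b, D, tau):
    """One integration step: least squares with the normalization constraint."""
    S = np.column_stack([s_col(xs, mu, sig, a, b, D, tau) for mu in mus])
    G = np.column_stack([g(xs, mu, sig) for mu in mus])
    r = (G @ w_prev) / tau            # target: p-hat(x_i, w(k-1)) / tau
    n = len(mus)
    # KKT system for min ||S w - r||^2 subject to sum(w) = 1
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * S.T @ S
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    rhs = np.concatenate([2.0 * S.T @ r, [1.0]])
    return np.linalg.solve(K, rhs)[:n]
```

By construction, the updated weights satisfy the normalization constraint exactly at every step, up to linear-algebra round-off.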

Response moments
With the probability density function on hand, many response statistics can be computed, including the moments. The first and second order moments of the system response at the kth time step can be computed as follows.

The expectation with respect to the jth Gaussian function has the property
$$E_j[X_i] = \mu_{j,i}, \quad E_j[X_i X_l] = \Sigma_{j,il} + \mu_{j,i}\mu_{j,l}.$$
By definition, we have
$$E[\mathbf{X}(k)] = \int_{R^{2n}} \mathbf{x}\, \hat{p}(\mathbf{x}, \mathbf{w}(k))\, d\mathbf{x} = \sum_{j=1}^{N_G} w_j(k)\, \boldsymbol{\mu}_j.$$
Hence, we have the correlation matrix, i.e. the second order moments, as
$$E[\mathbf{X}(k)\mathbf{X}(k)^T] = \sum_{j=1}^{N_G} w_j(k)\left(\boldsymbol{\Sigma}_j + \boldsymbol{\mu}_j\boldsymbol{\mu}_j^T\right).$$
Once the first and second moments are obtained, we can compute the response moments of any order. Let us use a typical moment $E[X_i^p X_k^q]$ to illustrate the computational process by using the radial basis Gaussian functions in Eq. (6). Because of the special radial form of the Gaussian functions, for the jth Gaussian distribution $g(\mathbf{x}, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)$, the random variables $X_i$ and $X_k$ can be viewed as being independent for $i \neq k$. Hence, the expectation with respect to this Gaussian distribution can be written as
$$E_j[X_i^p X_k^q] = E_j[X_i^p]\, E_j[X_k^q].$$
That is, we only need to focus on the computation of the moment $E_j[X_i^p]$ with respect to the jth Gaussian distribution. To compute this quantity, we convert it to the central moments as
$$E_j[X_i^p] = \sum_{r=0}^{p} \binom{p}{r}\, \mu_{j,i}^{p-r}\, E_j[(X_i - \mu_{j,i})^r].$$
The central moments determined by the jth Gaussian function $g(\mathbf{x}, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)$ are given by
$$E_j[(X_i - \mu_{j,i})^r] = \begin{cases} 0, & r \text{ odd}, \\ (r-1)!!\, \sigma_j^r, & r \text{ even}. \end{cases}$$
This demonstrates another advantage of choosing the true probability density as the basis function. The response moments of any order can be computed without numerical integration in the space $R^{2n}$.
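The moment computation above reduces to binomial sums over Gaussian central moments. A minimal one-dimensional sketch (illustrative names, not the authors' code):

```python
from math import comb

def double_factorial(n):
    """n!! with the convention (-1)!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def gaussian_central_moment(sigma, r):
    """E[(X - mu)^r] for a scalar Gaussian: 0 if r is odd, (r-1)!! sigma^r if even."""
    return 0.0 if r % 2 else double_factorial(r - 1) * sigma**r

def mixture_moment(weights, mus, sigma, p):
    """E[X^p] under p-hat = sum_j w_j g(x, mu_j, sigma), via binomial expansion."""
    total = 0.0
    for w, mu in zip(weights, mus):
        e_j = sum(comb(p, r) * mu**(p - r) * gaussian_central_moment(sigma, r)
                  for r in range(p + 1))
        total += w * e_j
    return total
```

No numerical quadrature appears anywhere: every expectation is a closed-form sum, which is the efficiency advantage claimed in the text.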
The logic of the proposed RBF-NN method for the path integral solution of the FPK equation is summarized in the following flowchart (Fig. 1).

Numerical examples
In this section, we present two numerical examples of vibration analysis to illustrate the application of the proposed RBF-NN PDF solution method. As discussed in [34], radial-basis-function networks with one hidden layer and the same smoothing factor have the universal approximation property, meaning that the error of the solution can be made sufficiently small when enough terms of the RBF-NN solution are used. In an earlier study, we demonstrated the accuracy of the PDF solution. In this paper, we shall not directly address this issue. Instead, we apply the RBF-NN PDF solution to compute the transient PDF and response moments of the system, and investigate the accuracy of the RBF-NN solutions with the help of Monte Carlo simulations.

Example one
Consider a first order nonlinear system
$$\dot{X} = aX - bX^3 + W_1(t),$$
where $b > 0$, and $W_1(t)$ denotes the Gaussian white noise excitation with intensity $2D_1$. The drift and diffusion coefficients of the FPK equation are given by
$$m(x) = ax - bx^3, \quad b_{11} = 2D_1.$$
We choose the computational domain in the state space to be $D = [-5, 5]$ and the parameters $a = 1$, $b = 2$ and $D_1 = 3$. The number of Gaussian functions is $N_G = 101$. The means of the Gaussian functions are placed at the grid points, while the standard deviations of all the Gaussian functions are taken to be the same constant $\sigma_j = h$, where $h = 10/101$ is the grid size. The number of sampling points is $N_s = 2N_G$, which was found to be sufficient. We should point out that when $a > 0$, the steady-state PDF of the system is bi-modal because $x = 0$ is an unstable equilibrium.
Consider an initial condition $x_0 = 0$ at $t = 0$ with probability one. We choose a small integration time step $\tau = 0.01$, and construct a short-time Gaussian approximation of the PDF with mean $x_0 + m(x_0, 0)\tau = 0$ and variance $2b(x_0, 0)\tau = 0.12$. Figure 2 presents evolutions of the response PDF and a comparison of the RBF-NN PDF solution with Monte Carlo simulations. In this example, we generate $10^5$ sample time histories in the simulations in order to compute all the response statistics reported herein. The simulations take 103 s. As a comparison, the RBF-NN method takes 2.83 s to compute the same set of solutions. The overall rms error of the RBF-NN PDF solution relative to the Monte Carlo simulation is 0.0141. The agreement between the two results is indeed excellent.
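A Monte Carlo reference of this kind can be generated with a plain Euler-Maruyama scheme. The sketch below uses the parameters of this example but a reduced sample count and is an illustration, not the simulation code behind the reported timings.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, D1 = 1.0, 2.0, 3.0               # example-one parameters
tau, n_steps, n_samples = 0.01, 500, 20000

# x0 = 0 with probability one
x = np.zeros(n_samples)
for _ in range(n_steps):
    # Euler-Maruyama step for dX = (aX - bX^3) dt + sqrt(2 D1) dB
    x += (a*x - b*x**3) * tau + np.sqrt(2.0*D1*tau) * rng.standard_normal(n_samples)

m1 = x.mean()            # first moment at t = n_steps * tau
m2 = np.mean(x**2)       # second moment at t = n_steps * tau
```

By t = 5 the ensemble has essentially reached the bi-modal steady state, so the sample moments can be compared directly against the steady-state values.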
Let m p = E[X p ] denote the pth order moment of the response. We apply Eqs. (29) and (32) to compute the first and second order moments of the response. Furthermore, we shall compare the moments obtained by the RBF-NN PDF solution with those by Monte Carlo simulations and Gaussian closure approximation.
The Gaussian closure makes use of the following relationships of the higher order moments to the lower order ones:
$$E[X^3] = 3E[X]E[X^2] - 2E[X]^3, \quad E[X^4] = 3E[X^2]^2 - 2E[X]^4.$$
The equations for the first and second order moments of system (37) obtained with the Gaussian closure are given by
$$\frac{dm_1}{dt} = am_1 - bE[X^3], \quad \frac{dm_2}{dt} = 2am_2 - 2bE[X^4] + 2D_1.$$
We first consider the case $a = 1 > 0$, for which the steady-state PDF of the system is bi-modal. The bi-modal distribution is certainly far from Gaussian. Figure 3 shows the time histories of the first and second order moments of the system response by the RBF-NN solution, the Gaussian closure method and Monte Carlo simulations. It is seen from the figure that the response moments obtained with the PDF of the RBF-NN solution are highly accurate as compared to Monte Carlo simulations, while the Gaussian closure performs poorly because the underlying process is far from Gaussian.
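The Gaussian closure moment equations for system (37) can be integrated with a simple explicit scheme. The sketch below uses the parameters a = 1, b = 2 and D₁ = 3 and assumes the standard closure relations E[X³] = 3m₁m₂ − 2m₁³ and E[X⁴] = 3m₂² − 2m₁⁴; it is an illustration rather than the authors' code.

```python
a, b, D1 = 1.0, 2.0, 3.0
tau, n_steps = 1e-3, 20000
m1, m2 = 0.0, 1e-6               # starting near x0 = 0 with probability one

for _ in range(n_steps):
    e3 = 3.0*m1*m2 - 2.0*m1**3   # Gaussian closure for E[X^3]
    e4 = 3.0*m2**2 - 2.0*m1**4   # Gaussian closure for E[X^4]
    dm1 = a*m1 - b*e3
    dm2 = 2.0*a*m2 - 2.0*b*e4 + 2.0*D1
    m1 += tau * dm1
    m2 += tau * dm2
```

At steady state with m₁ = 0, the closure predicts 3b m₂² − a m₂ − D₁ = 0, i.e. m₂ = (a + √(a² + 12 b D₁))/(6b) ≈ 0.795 for these parameters, which illustrates how the closure settles on a single Gaussian-consistent value regardless of the true bi-modal shape.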
The Gaussian closure method is only good for weakly nonlinear systems whose response distribution is nearly Gaussian. When the parameter $a < 0$, the response distribution of system (37) is nearly Gaussian with a single peak at $x = 0$. Let $a = -1$, $b = 0.1$ and $D_1 = 3$. Figure 4 shows the first and second order response moments of the system by the RBF-NN solution, the Gaussian closure method and Monte Carlo simulations. The performance of the Gaussian closure method improves significantly because the response of the system is nearly Gaussian. Figure 5 shows the steady-state second order moments of the system response as a function of the parameter b. It should be noted that when b is large, even when $a > 0$, the two peaks of the probability distribution move closer to each other, resulting in a distribution that looks more like a Gaussian function. Hence, as b increases, the error of the Gaussian closure solution decreases. The RBF-NN solution is consistently accurate as compared to Monte Carlo simulations for all the parameters we considered.
In Fig. 5 for $a = 1$, the average error in $m_2$ over the range of the parameter b is 0.2580% for the RBF-NN solution, while the same error of the Gaussian closure solution is 24.7659%, about 96 times bigger than that of the RBF-NN solution. For $a = -1$, the average error in $m_2$ of the RBF-NN solution is 0.2948%, while that of the Gaussian closure solution is 7.4546%, about 25 times bigger. In this case, the response is close to Gaussian.

Example Two
Consider a strongly nonlinear Van der Pol oscillator under external Gaussian white noise excitation,
$$\ddot{X} - \beta(1 - X^2)\dot{X} + X = W_1(t),$$
where $\beta = 1.5$ is a constant, and $W_1(t)$ denotes the Gaussian white noise excitation with intensity $2D_1 = 0.5$. This system has long been a challenging example in the study of stochastic dynamic systems. There has been no report in the literature on the transient PDF of its response. Here, we apply the RBF-NN solution method to obtain the transient probability distribution of the system response and the time-varying moments.
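A Monte Carlo reference for this oscillator can again be produced with Euler-Maruyama on the state-space form dX₁ = X₂ dt, dX₂ = [β(1 − X₁²)X₂ − X₁] dt + √(2D₁) dB₁. The sketch below uses a reduced sample count and hypothetical variable names; it is an illustration under that assumed state-space form.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, two_D1 = 1.5, 0.5
tau, n_steps, n_samples = 0.005, 4000, 5000

x1 = np.zeros(n_samples)          # displacement, starting at the origin
x2 = np.zeros(n_samples)          # velocity
for _ in range(n_steps):
    dx1 = x2 * tau
    dx2 = (beta*(1.0 - x1**2)*x2 - x1) * tau \
          + np.sqrt(two_D1*tau) * rng.standard_normal(n_samples)
    x1 += dx1
    x2 += dx2

# second order sample moments at t = n_steps * tau
m20, m02, m11 = np.mean(x1**2), np.mean(x2**2), np.mean(x1*x2)
```

Since the origin is unstable, the ensemble spreads onto the noisy limit cycle, and at steady state m₁₁ should fluctuate around zero while m₂₀ and m₀₂ remain order one.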
Let us take the domain of interest in the state space as $D_G = [-3, 3] \times [-5.5, 5.5]$ and partition it into a $N_G \times N_G$ grid where $N_G = 100$. At each grid point, there is a Gaussian function. Every Gaussian function is assumed to have the same standard deviation $\sigma_j = h$, where h is the grid size. We have found from computational exercises that it is beneficial to sample the points for computing the data matrix in a slightly larger domain than $D_G$, and we do so in this example. We shall not show the first order response moments of the system because the solutions obtained by all the methods under consideration are nearly zero, as they should be. Figure 7 shows the second order response moments of the system by the RBF-NN solution, the Gaussian closure method and Monte Carlo simulations. It is seen from the figure that the RBF-NN solution is consistently accurate as compared to Monte Carlo simulations, while $m_{20}$ and $m_{02}$ obtained by the Gaussian closure method have significant steady-state errors in addition to poor transient performance. The reason for the poor performance of the Gaussian closure is explained by the PDF shapes in Fig. 6. For $t > 0$ away from the concentrated initial condition, the PDF of the Van der Pol oscillator is far from Gaussian.
However, at any time t > 0, the geometrically complex PDF of the Van der Pol oscillator can always be expressed as a linear combination of radial basis Gaussian functions given in Eq. (4) with a proper choice of the means and standard deviations. Furthermore, when N G is sufficiently large, the error of the RBF-NN solution converges to zero [34].
Let us now discuss the solution errors of the response moments in reference to the simulation results. The steady-state error averaged over the last 100 data points in $m_{20}$ of the RBF-NN solution is 0.00079333, a very small percentage of the response moment. The same error of the Gaussian closure solution is 1.0908, which is over 40% of the response moment $m_{20}$ according to the simulation results. The steady-state error averaged over the last 100 data points in $m_{02}$ of the RBF-NN solution is 0.0018222, also a very small percentage of the response moment. The same error of the Gaussian closure solution is again 1.0908, over 40% of the response moment $m_{02}$. The steady-state error averaged over the last 100 data points in $m_{11}$ is 0.00072965 for the RBF-NN solution and 0.00080836 for the Gaussian closure solution. The exact steady-state value of $m_{11}$ is known to be zero.

Conclusions
This paper has presented a radial basis function neural networks approach with Gaussian activation functions for constructing the time-varying probability density function of the FPK equation of nonlinear stochastic dynamic systems with high accuracy and computational efficiency. Such a highly accurate solution of the time-varying probability density function enables us to analyze various response statistics of random vibrations of nonlinear dynamic systems; the response moments are considered as an example of applications. The choice of standard Gaussian probability density functions as activation functions makes computations of other response statistics such as moments highly efficient, as illustrated in the paper. The aforementioned features are the advantages of the proposed method as compared to other methods such as the finite element method, random walk and direct simulations. Since the solution obtained with the proposed method is numerical in nature, it is at a disadvantage compared with methods such as stochastic averaging and equivalent linearization, which deliver approximate solutions in analytical form.
Two challenging nonlinear dynamic systems have been studied with the radial basis function neural networks, including the notorious Van der Pol oscillator, which defies analytical treatment. The time-varying probability density functions as well as the response moments of these systems are presented, which are highly accurate and are computed with short CPU times compared to simulations. It is believed that such accurate time-varying response statistics for nonlinear dynamic systems have not been reported in the literature before.