The evolution of the law of random processes in the analysis of dynamic systems

The paper presents a method for determining the evolution of the cumulative distribution function of random processes encountered in the study of dynamic systems with uncertain characterizing parameters. It is proved that these distribution functions solve a partial differential equation whose coefficients can be determined once the dynamic system has been solved, and whose numerical solution can be obtained with the finite difference method. Two simple problems are solved both explicitly and numerically, and the results are compared.

the routines for dynamic analysis of plane, three-dimensional, or beam and shell-based structures [3], and has been applied to the study of some masonry constructions [4,5]. For these structures, the uncertainties have particular relevance, due to the constitutive characteristics of the material and the geometry.
In this paper, we deduce an equation similar to the one proposed in [2], but one which allows the cumulative distribution function of the chosen random variable, rather than its density function, to be calculated directly. Indeed, while the former is always a locally integrable function, it may happen that the latter is defined only in a distributional sense. The deduction is made using classical theorems of measure theory; although most notions discussed below can be presented in their intuitive sense, we prefer to give explicit definitions and proofs to avoid misunderstandings. Thus, both the approach proposed in this paper and the one proposed in [2] are rigorously justified. Moreover, with the help of differential forms and the coarea formula, relations are deduced for the explicit computation of both the probability density function and the cumulative distribution function; these are especially useful if the number of random input parameters is greater than one.
Finally, two examples are presented in which the evolution of the probability distribution of a chosen output parameter is explicitly calculated. The solution is then compared with the numerical one obtained with the MADY code, which has meanwhile been updated with the new numerical procedures presented in this paper.

Background and notations
Let (Ω, A, ℙ) be a probability space. Here Ω is the set of the outcomes, A is the σ-algebra on Ω made of all the events and ℙ : A → [0, 1] is a probability measure, i.e. a positive measure on (Ω, A) such that ℙ(Ω) = 1. Moreover, let ℝⁿ and B(ℝⁿ) be the n-dimensional Euclidean space and its corresponding Borel σ-algebra, respectively. A vector random variable is a measurable function X : Ω → ℝⁿ, i.e. a function such that {X ∈ B} ∈ A for every B ∈ B(ℝⁿ), where, as usual, {X ∈ B} denotes X⁻¹(B). The measure μ_X denotes the law of X, or the image of measure ℙ under X [6], i.e. the measure defined on (ℝⁿ, B(ℝⁿ)) by

μ_X(B) = ℙ{X ∈ B},

and the function F_X : ℝⁿ → [0, 1], defined by

F_X(x₁, …, xₙ) = μ_X((−∞, x₁] × ⋯ × (−∞, xₙ]),

is called the cumulative distribution function of X.
Let Lⁿ be the Lebesgue measure on (ℝⁿ, B(ℝⁿ)). Measure μ_X is said to be absolutely continuous (with respect to Lⁿ) if μ_X(B) = 0 for every set B ∈ B(ℝⁿ) with Lⁿ(B) = 0. In this case, the Radon–Nikodym theorem [6] guarantees the existence of a positive integrable function p_X such that

μ_X(B) = ∫_B p_X dLⁿ     (1)

for every B ∈ B(ℝⁿ). Function p_X is called the (joint) probability density function of the vector random variable X, and it holds

F_X(x₁, …, xₙ) = ∫_{−∞}^{x₁} ⋯ ∫_{−∞}^{xₙ} p_X dLⁿ, or p_X = ∂ⁿF_X/(∂x₁ ⋯ ∂xₙ).     (2)

If μ_X is not absolutely continuous, then its probability density function does not exist as an integrable function. However, it can be defined as a generalized function by interpreting (2) as a distributional derivative [7].
Let X be a vector random variable and f : ℝⁿ → ℝᵐ a Borel function; then Y = f ∘ X is again a vector random variable and, for each B ∈ B(ℝᵐ),

μ_Y(B) = μ_X(f⁻¹(B)).     (3)

If X has a probability density function p_X, then

μ_Y(B) = ∫_{f⁻¹(B)} p_X dLⁿ     (4)

by (1) and (3). Moreover, if f is such that [8] Lⁿ(f⁻¹(B)) = 0 for every Borel subset B of ℝᵐ with Lᵐ(B) = 0, then, in view of (4), μ_Y is (finite and) absolutely continuous with respect to Lᵐ and thus it admits a probability density function p_Y. In the particular case when n = m and f is a diffeomorphism, i.e. a bijective application such that both f and f⁻¹ are continuously differentiable, then

p_Y(y) = p_X(f⁻¹(y)) |Jf⁻¹(y)|,

where Jf⁻¹ denotes the Jacobian of f⁻¹.
More generally, suppose that there exists a countable family (B_i)_{i∈I} of pairwise disjoint open sets of ℝⁿ, such that (i) the event B = ⋃_{i∈I} {X ∈ B_i} has probability ℙ(B) equal to 1; (ii) for each i ∈ I, the restriction f_i of f to B_i is a diffeomorphism of B_i onto an open set C_i ⊂ ℝⁿ. Under this assumption, define

p_Y(y) = Σ_{i∈I} p_X(f_i⁻¹(y)) |Jf_i⁻¹(y)| χ_{C_i}(y),

where χ_{C_i} is the indicator function of C_i. Then p_Y so defined is the probability density function of Y. A stochastic process {X_t, t ∈ D}, with D = [0, T] a real interval, is a family of vector random variables indexed by a parameter t, defined on a common probability space (Ω, A, ℙ) and all with their values in (ℝⁿ, B(ℝⁿ)), which is called the state space. By definition, for each t ∈ D, X_t is an A-measurable function and, for each ω ∈ Ω, {X_t(ω), t ∈ D} is a function defined in D that is called a sample function, realization or trajectory of the process.
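As a quick illustration of the piecewise change-of-variables formula (a sketch added here, not one of the paper's examples): for X uniform on (−1, 1) and Y = X², the map has two diffeomorphic branches with inverses ±√y, each with |Jf_i⁻¹(y)| = 1/(2√y), so the formula gives p_Y(y) = 1/(2√y) on (0, 1). A minimal Python check against the empirical distribution:

```python
import math
import random

def p_Y(y):
    # density of Y = X^2 with X ~ Uniform(-1, 1): sum over the two
    # branches f_i^{-1}(y) = ±sqrt(y), each with |Jf_i^{-1}| = 1/(2 sqrt(y))
    if not 0.0 < y < 1.0:
        return 0.0
    p_X = 0.5  # uniform density on (-1, 1)
    return (p_X + p_X) / (2.0 * math.sqrt(y))

# Monte Carlo check of the induced CDF, whose exact value is sqrt(y)
random.seed(0)
n = 200_000
samples = [random.uniform(-1.0, 1.0) ** 2 for _ in range(n)]
y0 = 0.25
empirical = sum(s <= y0 for s in samples) / n
exact = math.sqrt(y0)  # integral of p_Y from 0 to y0
```

The empirical value agrees with √0.25 = 0.5 up to Monte Carlo error.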
Let z ∈ ℝ and f : ℝᵐ → ℝ be a smooth function. Then the set

{z = f} = {x ∈ ℝᵐ : f(x) = z},

if it is not empty, is a regular hypersurface of ℝᵐ whose orientation is determined by the unit normal vector ν = ∇f/|∇f|. Let us put ν = (ν₁, …, νₘ) and denote by σ_S the volume form on {z = f} [9], i.e. the differential form

σ_S = Σ_{j=1}^{m} (−1)^{j−1} ν_j dx₁ ∧ ⋯ ∧ d̂x_j ∧ ⋯ ∧ dxₘ

(where d̂x_j means "omit the factor dx_j"). Recalling that on the hypersurface {z = f} we have ν_j = f_{x_j}/|∇f|, it is easy to verify that, for each j = 1, …, m, the relation [7]

σ_S = (−1)^{j−1} (|∇f|/f_{x_j}) dx₁ ∧ ⋯ ∧ d̂x_j ∧ ⋯ ∧ dxₘ

holds everywhere f_{x_j} ≠ 0. For each s ≤ m, let H^s be the Hausdorff measure in ℝᵐ [10], defined in such a way that H^m = L^m. Then for every function g which is integrable on {z = f} we have

∫_{{z=f}} g σ_S = ∫_{{z=f}} g dH^{m−1}.     (5)

The following useful result is a consequence of the coarea formula. Let f : ℝᵐ → ℝ be a smooth function such that |∇f(x)| > 0, and g : ℝᵐ → ℝ be an integrable function. Moreover, let us put

G(z) = ∫_{{f ≤ z}} g dL^m.     (6)

Then

G′(z) = ∫_{{f = z}} (g/|∇f|) dH^{m−1}     (7)

and, in particular,

∫_{ℝᵐ} g dL^m = ∫_ℝ ( ∫_{{f = z}} (g/|∇f|) dH^{m−1} ) dz.     (8)
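The coarea identity stated above can be checked numerically in a simple case (an added sketch, not part of the paper). Take f(x, y) = √(x² + y²) and g ≡ 1: the region integral over {f ≤ z} is the area πz², while the level-set side integrates H¹({f = t}) = 2πt over [0, z] (here |∇f| = 1 away from the origin), giving the same value:

```python
import math

# f(x, y) = sqrt(x^2 + y^2), g = 1, |grad f| = 1 away from the origin.
# Region side: L^2({f <= z}) = pi*z^2.
# Coarea side: integral over [0, z] of H^1({f = t})/|grad f| = 2*pi*t.

def lebesgue_side(z, h=0.01):
    # grid approximation of the area of the disk {f <= z}
    n = int(z / h) + 2
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if math.hypot(i * h, j * h) <= z:
                total += h * h
    return total

def coarea_side(z, m=1000):
    # midpoint rule for the 1D integral of the level-set lengths 2*pi*t
    dt = z / m
    return sum(2.0 * math.pi * (k + 0.5) * dt * dt for k in range(m))
```

Both sides return values close to π for z = 1 (the midpoint rule is even exact here, since the integrand is linear in t).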

Stochastic dynamic system
The equation of motion of a body, discretized by the finite element method with respect to the space variable, is reduced to a system of ODEs

M ü + r(u, u̇) = B f(t),  u(0) = u₀,  u̇(0) = u̇₀,     (9)

where t ∈ D is the time and u, u̇ and ü are the displacement, velocity and acceleration vector, respectively, which are defined on D and take their values in ℝⁿ. M is the mass matrix, r is the internal force vector (including damping and restoring force), B is the input force influence matrix, f is the external excitation vector and u₀ and u̇₀ are the initial displacement and velocity vector, respectively. By introducing the state vector y = (u, u̇), Equation (9) can be rewritten as

ẏ = h(y, t),  y(0) = y₀,     (10)

where h(y, t) = (u̇, M⁻¹[B f(t) − r(u, u̇)]). If randomness is present, coming from the initial conditions, the excitations, or the properties of the system, a random state equation can be written as [2]

ẏ = h(y, t, θ),  y(0) = y₀(θ),     (11)

where θ = (θ₁, θ₂, …, θₘ) ∈ Λ ⊂ ℝᵐ is the vector of all random parameters, i.e. the values assumed by the vector random variable Θ : Ω → ℝᵐ, that is supposed to be time-independent and to have a (joint) probability density function p_Θ. If the (deterministic) problem (10) is well posed, then for each choice of θ and y₀, Eq. (11) has one and only one solution

y = y(θ, t).     (12)

Below, for the sake of simplicity, we omit to explicitly indicate the dependence on y₀ and write y(θ, t) to denote the solution of (11).
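The state-space reduction above can be exercised on a toy system. The sketch below (illustrative names, not the MADY routines) integrates one realization of the random state equation for a unit-mass oscillator ẍ + θx = 0, with θ a sampled value of the random stiffness, and compares with the exact solution cos(√θ t):

```python
import math

# One realization of the random state equation y' = h(y, t, theta),
# for the unit-mass oscillator x'' + theta*x = 0, state y = (x, v).

def h(y, t, theta):
    x, v = y
    return (v, -theta * x)

def rk4(y0, theta, t_end, dt=1e-3):
    # classical 4th-order Runge-Kutta integration of y' = h(y, t, theta)
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        k1 = h(y, t, theta)
        k2 = h((y[0] + 0.5*dt*k1[0], y[1] + 0.5*dt*k1[1]), t + 0.5*dt, theta)
        k3 = h((y[0] + 0.5*dt*k2[0], y[1] + 0.5*dt*k2[1]), t + 0.5*dt, theta)
        k4 = h((y[0] + dt*k3[0], y[1] + dt*k3[1]), t + dt, theta)
        y = (y[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
             y[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)
        t += dt
    return y

theta = 4.0                                  # one sampled stiffness value
x_num, _ = rk4((1.0, 0.0), theta, 1.0)       # x(0) = 1, v(0) = 0
x_exact = math.cos(math.sqrt(theta) * 1.0)   # exact solution cos(2t) at t = 1
```

Repeating this for many sampled θ is what the probabilistic machinery of the following sections organizes.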
In applications, we are interested in considering stochastic processes of the type

Z_t = Z(Θ, t), with Z(θ, t) = ψ(y(θ, t)),

and in determining the evolution of the law of Z_t. Here ψ : ℝ^{2n} → ℝ is a deterministic smooth function. To this aim, for each t ∈ D, let us consider the family of measures {ν_t^θ : θ ∈ Λ} which are defined on ℝ by

ν_t^θ(B) = χ_B(Z(θ, t))     (13)

for every Borel subset B of ℝ, with χ_B the indicator function of B. Measure ν_t^θ is the law of the conditional probability of Z_t, given Θ = θ, and sometimes it is denoted by p_Z(z|Θ = θ, t). Indeed, once θ and t are fixed, ν_t^θ is the Dirac measure on ℝ, concentrated in the point Z(θ, t). The cumulative distribution function of ν_t^θ is the real function

F(z; θ, t) = ν_t^θ((−∞, z]),     (14)

which equals 1 if z ≥ Z(θ, t) and 0 otherwise. Let C₀ be the space of the continuous real functions that are compactly supported in ℝ, with the maximum norm |·|_{C₀}. For every function f ∈ C₀ we have

∫_ℝ f(z) ν_t^θ(dz) = f(Z(θ, t))     (15)

(this is true for the indicator functions of measurable sets by (13) and then for simple functions, so that the general case follows from an approximation argument). Therefore, the map θ → ∫_ℝ f(z) ν_t^θ(dz) is measurable on Λ.
Proposition (i) For fixed t ∈ D, there is one and only one measure ν_t on ℝ such that

ν_t(B) = ∫_Λ ν_t^θ(B) p_Θ(θ) dθ     (16)

for every Borel subset B of ℝ; ν_t is the law of Z_t. (ii) In view of (13), relation (16) can be rewritten as

ν_t(B) = ∫_{{Z_t ∈ B}} p_Θ(θ) dθ,     (17)

where {Z_t ∈ B} = {θ ∈ Λ : Z(θ, t) ∈ B}.

◻
If we denote by F_Z(z, t) the cumulative distribution function of ν_t, with the help of (16) and (14) we can write

F_Z(z, t) = ∫_Λ F(z; θ, t) p_Θ(θ) dθ = ∫_{{Z_t ≤ z}} p_Θ(θ) dθ.     (18)
Moreover, suppose that, for each t ∈ D, Z_t is a smooth function of θ on Λ, consider the regions

{Z_t ≤ z} = {θ ∈ Λ : Z(θ, t) ≤ z},  {Z_t = z} = {θ ∈ Λ : Z(θ, t) = z},     (19)

and assume the hypothesis

|∇Z_t(θ)| > 0 for every θ ∈ Λ.     (20)

Then from (18) and (7) it follows

∂F_Z/∂z (z, t) = ∫_{{Z_t = z}} (p_Θ/|∇Z_t|) dH^{m−1}.

In many applications of interest, for each t ∈ D and B ∈ B(ℝ), Z_t is such that L^m({Z_t ∈ B}) = 0 whenever L¹(B) = 0, so that, in view of (17), it also results ν_t(B) = 0 and therefore ν_t is absolutely continuous with respect to L¹ [and F_Z(·, t) is an absolutely continuous real function] [6]. Then, by the Radon–Nikodym theorem there exists a probability density function p_Z(z, t) of ν_t, i.e. for every integrable function f it holds

∫_ℝ f dν_t = ∫_ℝ f(z) p_Z(z, t) dz.     (21)

Thus, in this case, from (21) and (8) we deduce

p_Z(z, t) = ∫_{{Z_t = z}} (p_Θ/|∇Z_t|) dH^{m−1}.     (22)

For all z and t, {Z_t = z} is a regular hypersurface of Λ by (20). Moreover, in view of (5) and (6), we have

p_Z(z, t) = ∫_{{Z_t = z}} (p_Θ/|∇Z_t|) σ_S,     (23)

where σ_S is the volume form on {Z_t = z}. As will be shown later in the examples, Eqs. (18), (21) and (22), (23) can be used for the explicit calculation of the cumulative distribution function and the probability density function, respectively, even if the dimension of Λ is greater than one.
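The two computational routes described above, the region integral for the cumulative function and the level-set integral for the density, can be tried on a toy map (an added sketch with illustrative names): Z(θ) = θ₁ + θ₂, with Θ uniform on [0, 1]². The level set {Z = z} is a straight segment and |∇Z| = √2, so both integrals are elementary:

```python
import math
import random

# Toy map Z(theta) = theta_1 + theta_2, Theta uniform on the unit square.

def F_Z_mc(z, n=200_000, seed=1):
    # Monte Carlo estimate of the region integral of p_Theta over {Z <= z}
    random.seed(seed)
    hits = sum(random.random() + random.random() <= z for _ in range(n))
    return hits / n

def p_Z(z):
    # level-set integral: {Z = z} is a segment of length min(z, 2-z)*sqrt(2)
    # inside the square, |grad Z| = sqrt(2), p_Theta = 1, so the sqrt(2)
    # factors cancel and the density is the triangular one
    if 0.0 <= z <= 1.0:
        return z
    if 1.0 < z <= 2.0:
        return 2.0 - z
    return 0.0
```

For z = 0.5 the region is a triangle of area 1/8, and the Monte Carlo estimate of F_Z matches it; the density at the same point is p_Z(0.5) = 0.5.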

The evolution of the law of Z t
In [2] the authors obtain a linear PDE in the unknown p_ZΘ(z, θ, t) which, once solved, allows one to determine the probability density function p_Z(z, t) of Z_t, by integrating p_ZΘ over Λ with respect to θ. Below we propose a similar method to determine the cumulative distribution function F_Z(z, t) of Z_t directly. Let us start with some notation.
Let N = ℝ × Λ × D and K be the indicator function of the region

{(z, θ, t) ∈ N : Z(θ, t) ≤ z},     (24)

that is, K(z, θ, t) = 1 if Z(θ, t) ≤ z and K(z, θ, t) = 0 otherwise (note that, for each pair z and t, the region {Z_t ≤ z} that has been defined in (19) is the corresponding section of the region (24)). Let us denote by D the space of infinitely differentiable real functions with compact support in N.
Consider the two distributions [7] which, for every φ ∈ D, are defined in N by

(K, φ) = ∫_N K(x) φ(x) dx  and  (δ(G), φ) = ∫_{{G = 0}} (φ/|∇G|) dH^{m+1},

with x = (z, θ, t) and G(x) = z − Z(θ, t); here p_ZΘ(z, θ, t) = p_ZΘ(z|Θ = θ, t) p_Θ(θ) denotes the joint density of Z_t and Θ. The set {G = 0} is a regular hypersurface, because in N we have |∇G| = √(1 + |∇Z|²) ≥ 1. Note that while the first distribution is regular, the second is concentrated on a set that is negligible with respect to the Lebesgue measure.
It can be proved [7] that δ(G) is the distributional derivative of K with respect to z, i.e. the distribution such that

(∂K/∂z, φ) = −(K, ∂φ/∂z),  ∂K/∂z = δ(G),  ∂K/∂t = −Ż(θ, t) δ(G),     (25)

or also ∂K/∂z = δ(z − Z(θ, t)). From (25)₁–(25)₃ we deduce the PDE

∂K/∂t + Ż(θ, t) ∂K/∂z = 0     (26)

with the initial condition

K(z, θ, 0) = H(z − Z(θ, 0)),

which follows from (24) and (12), H denoting the Heaviside step function.
Once K has been calculated, F_Z is obtained from the relation

F_Z(z, t) = ∫_Λ K(z, θ, t) p_Θ(θ) dθ.     (27)

In a similar way [7], the distributional derivative δ′ of δ is defined, and it is shown that it results

∂δ(G)/∂t + Ż(θ, t) ∂δ(G)/∂z = 0,

from which, multiplying by p_Θ and recalling that p_ZΘ(z, θ, t) = δ(z − Z(θ, t)) p_Θ(θ), the PDE that has been proposed in [2] can be obtained, i.e.

∂p_ZΘ/∂t + Ż(θ, t) ∂p_ZΘ/∂z = 0.     (28)
The corresponding initial condition is

p_ZΘ(z, θ, 0) = δ(z − z₀) p_Θ(θ) or p_ZΘ(z, θ, 0) = δ(z − z₀(θ)) p_Θ(θ),

depending on whether z₀ is deterministic or not. Once p_ZΘ has been calculated, p_Z is obtained from the relation

p_Z(z, t) = ∫_Λ p_ZΘ(z, θ, t) dθ.     (29)

Thus K and p_ZΘ satisfy the same distributional PDE but with different initial conditions. In applications, system (11) and Eq. (26) [or (28)] are both solved numerically, after which F_Z(z, t) is determined by the relation (27) [or p_Z(z, t) is determined by (29)]. Two examples are presented below where the whole procedure can be followed explicitly. Let us briefly mention the solution technique for Eq. (26). The analogue for Eq. (28) is dealt with in detail in [2].
Using the method of characteristics [12] for this purpose, we can write the following system of ODEs, with the corresponding initial conditions:

dz/ds = Ż(θ, s), z(0) = ζ;  dt/ds = 1, t(0) = 0;  dK/ds = 0, K(0) = H(ζ − z₀).     (30)

From (30)₂ we obtain t = s, which in view of (30)₁ provides the following two implications:

ζ ≥ z₀ ⟹ z ≥ Z(θ, t),  ζ < z₀ ⟹ z < Z(θ, t).     (31)

From (30)₃ and (31), taking into account that Z(θ, 0) = z₀ by (12), we deduce K(z, θ, t) = H(z − Z(θ, t)). Finally, from (27) we obtain

F_Z(z, t) = ∫_Λ H(z − Z(θ, t)) p_Θ(θ) dθ = ∫_{{Z_t ≤ z}} p_Θ(θ) dθ,     (32)

which coincides with (18).
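The characteristic construction of K can be checked in code on the scalar dynamics of the next example, ż = θz (a sketch added for illustration, with hypothetical names): since K is constant along characteristics, tracing the characteristic through (z, t) back to s = 0 and evaluating the initial step must reproduce the direct formula H(z − Z(θ, t)):

```python
import math

# Characteristic solution of K_t + theta*z*K_z = 0 for the dynamics
# z' = theta*z, where Z(theta, t) = z0*exp(theta*t).

def heaviside(x):
    return 1.0 if x >= 0.0 else 0.0

def K_by_characteristics(z, theta, t, z0):
    # trace the characteristic through (z, t) back to s = 0 and
    # evaluate the initial datum H(z - z0) there
    z_back = z * math.exp(-theta * t)
    return heaviside(z_back - z0)

def K_direct(z, theta, t, z0):
    # direct formula K = H(z - Z(theta, t))
    return heaviside(z - z0 * math.exp(theta * t))
```

Both functions agree at every point, since z e^{−θt} ≥ z₀ if and only if z ≥ z₀ e^{θt}.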

The motion of a material point
Let us consider the rectilinear motion of a material point whose velocity is proportional to the traveled distance. The proportionality coefficient and the initial position are expressed by the two independent positive-valued random variables Θ₁ and Θ₂, respectively. Distance x is measured in meters and time t in seconds, and a superimposed dot indicates differentiation with respect to time. We can write

ẋ = θ₁ x and x(0) = θ₂.

By separating the variables we obtain

x(θ, t) = θ₂ e^{θ₁ t},

from which, assuming that ψ is the identity function,

Z(θ, t) = θ₂ e^{θ₁ t} and Ż(θ, t) = θ₁ Z(θ, t)

follow. From (18) we obtain the cumulative distribution function of Z_t,

F_Z(z, t) = ∫_{{θ₂ e^{θ₁ t} ≤ z}} p_Θ(θ₁, θ₂) dθ₁ dθ₂,

where p_Θ is the joint probability density function of the random variables Θ₁ and Θ₂. Alternatively, relation (18) can be used in the form F_Z(z, t) = ∫_Λ H(z − θ₂ e^{θ₁ t}) p_Θ(θ) dθ. In particular, we assume that Θ₁ and Θ₂ are two independent variables, each uniformly distributed on the interval [0, 1], so that p_Θ is the indicator function of the square Q = [0, 1] × [0, 1]. Then, we obtain (Fig. 1)

F_Z(z, t) = 0 for z ≤ 0;  F_Z(z, t) = z(1 − e^{−t})/t for 0 < z ≤ 1;  F_Z(z, t) = (ln z + 1 − z e^{−t})/t for 1 < z ≤ e^t;  F_Z(z, t) = 1 for z > e^t,

from which, differentiating with respect to z, we deduce the probability density function

p_Z(z, t) = (1 − e^{−t})/t for 0 < z ≤ 1;  p_Z(z, t) = (1/z − e^{−t})/t for 1 < z ≤ e^t;  p_Z(z, t) = 0 otherwise.     (33)

Obviously, the same result is obtained by directly using the relations (22) and (23). Thus, for example, for 0 < z ≤ 1, the level set {Z_t = z} is the curve θ₂ = z e^{−θ₁ t}, θ₁ ∈ [0, 1], and we have (Fig. 1)

p_Z(z, t) = ∫_{{Z_t = z}} (p_Θ/|∇Z_t|) dH¹ = ∫₀¹ e^{−θ₁ t} dθ₁ = (1 − e^{−t})/t,

according to (33)₁. Equation (26) and the corresponding initial condition take the form

∂K/∂t + θ₁ z ∂K/∂z = 0 and K(z, θ, 0) = H(z − θ₂),

respectively.
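As an added sanity check (not in the original), the region integral over {θ₂ ≤ z e^{−θ₁ t}} can be evaluated directly for 0 < z ≤ 1: since z e^{−θ₁ t} ≤ 1 there, it reduces to ∫₀¹ z e^{−u t} du = z(1 − e^{−t})/t, which a Monte Carlo estimate reproduces:

```python
import math
import random

# Check of the example's CDF: theta_1, theta_2 i.i.d. uniform on [0, 1],
# Z_t = theta_2 * exp(theta_1 * t). For 0 < z <= 1 the region integral
# over {theta_2 <= z*exp(-theta_1*t)} gives z*(1 - exp(-t))/t.

def F_exact(z, t):
    return z * (1.0 - math.exp(-t)) / t       # valid for 0 < z <= 1, t > 0

def F_mc(z, t, n=200_000, seed=2):
    # Monte Carlo estimate of P(theta_2 * exp(theta_1 * t) <= z)
    random.seed(seed)
    hits = 0
    for _ in range(n):
        th1, th2 = random.random(), random.random()
        if th2 * math.exp(th1 * t) <= z:
            hits += 1
    return hits / n
```

At z = 0.5, t = 1 s the closed form gives 0.5(1 − e⁻¹) ≈ 0.316, matched by the sample estimate up to Monte Carlo error.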
The numerical solution can be accomplished by means of the following steps.
Step 1 Construct a partition of Λ = Q into N cells of equal area, choose a representative point θ_i in each cell (i = 1, …, N), and assign to each θ_i the corresponding probability 1/N. Step 2 Discretize the first quadrant of the (z, t) plane by choosing a mesh width Δz and a time step Δt, and defining the discrete mesh points (z_j, t_n) by z_j = jΔz, t_n = nΔt. Then, for each representative point θ_i = (θ_{1i}, θ_{2i}), solve the equation

∂K/∂t + θ_{1i} z ∂K/∂z = 0

with the help of the initial condition

K(z, θ_i, 0) = H(z − θ_{2i})

by using the total variation diminishing (TVD) method [2,13].
Step 3 Finally, determine the cumulative distribution function by means of the discretized version of the relation (32), i.e. for each mesh point (z_j, t_n) (z_j = jΔz, t_n = nΔt) compute

F_Z(z_j, t_n) ≈ (1/N) Σ_{i=1}^{N} K(z_j, θ_i, t_n).

Figs. 2 and 3 show the numerical results that have been obtained with Δz = 6.25 · 10⁻³ m, Δt = 6.25 · 10⁻⁴ s. Each side of Λ has been divided into 40 equal subintervals, so that N = 1600.
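Step 2 can be sketched with the advection coefficient frozen to a constant a > 0 (a simplification for illustration; the paper treats the variable-coefficient equation with a TVD scheme [2,13]). First-order upwind, the simplest TVD method, transports the initial step while keeping K monotone and bounded in [0, 1]:

```python
# First-order upwind for K_t + a*K_z = 0, a > 0, transporting the
# initial step K(z, 0) = H(z - z0) to the right at speed a.

def upwind_step(K, a, dz, dt):
    c = a * dt / dz        # CFL number, stability requires c <= 1
    # inflow boundary value on the left is kept fixed
    return [K[0]] + [K[j] - c * (K[j] - K[j - 1]) for j in range(1, len(K))]

dz, dt, a, z0 = 0.01, 0.005, 1.0, 0.2
grid = [j * dz for j in range(201)]
K = [1.0 if z >= z0 else 0.0 for z in grid]
for _ in range(100):       # advance to t = 0.5
    K = upwind_step(K, a, dz, dt)
# the jump, initially at z0 = 0.2, now sits near z = z0 + a*t = 0.7
```

The scheme smears the discontinuity over a few cells (its numerical diffusion) but never produces overshoots, which is exactly the total variation diminishing property exploited in Step 2.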
In addition to the cumulative function, the probability density function p_Z(z, t) was also numerically calculated by means of Eqs. (28) and (29), at the same four instants previously considered. The calculations have been made for N = 40 × 40, N = 80 × 80 and N = 160 × 160. We used Δz = 6.25 · 10⁻³ m and Δt = 6.25 · 10⁻⁴ s in the first two cases, and Δz = 3.125 · 10⁻³ m and Δt = 3.125 · 10⁻⁴ s in the third case. The obtained results are shown in Fig. 4 for t = t₁ and t = t₄; the other cases are similar.
At each instant t, a possible estimate of the relative error is obtained by comparing, at the mesh points z_i, the values f(z_i, t) and g(z_i, t) of the explicit and of the numerical solution, respectively.
The error was calculated for 0 ≤ z ≤ 4, and in all the examined cases its order of magnitude turned out to be practically constant over time. In the case of the cumulative function, the maximum error was less than 1.7 · 10⁻³. In the case of the density function, it was less than 1.9 · 10⁻¹ for N = 40 × 40, less than 1.4 · 10⁻¹ for N = 80 × 80 and less than 0.9 · 10⁻¹ for N = 160 × 160.