Computing Methods in the Analysis of Road Accident Reconstruction Uncertainty

The study addresses the problem of uncertainty in the analysis of accident situations in road traffic. The term "uncertainty" is well established in measurement techniques, but its application to the analysis of accident situations in road traffic, including accident reconstruction, is a relatively new field of knowledge. The objectives of this work include the presentation and examination of selected aspects of taking uncertainty into account when analysing the course of an accident and making the necessary calculations. Apart from the scientific objectives, an important practical goal may also be pointed out: the data and methods presented may be used by automotive technology experts in their accident reconstruction work. The paper presents seven methods that make it possible to take into account the uncertainty of the data used for the calculations, i.e. the extreme values method, the total differential method, the higher-order total differential method, the finite-difference method, the Gauss method, the method based on the description of stochastic processes, and the Monte-Carlo method. Apart from formal (mathematical) descriptions of the methods, an example of their use for estimating the uncertainty of selected quantities that describe an accident situation is demonstrated. The strengths and weaknesses of the individual methods are discussed in the context of the application considered.


Purposes of Analysing Road Accidents
When road accidents and collisions are examined, they may be either treated as a mass phenomenon or analysed individually. The examinations are carried out, above all, to get to know the nature of such incidents (whether considered in mass or individual terms) in order to identify their causes and, afterwards, to take actions aimed at improving road traffic safety in the future. Analyses of this kind are used by institutions responsible for shaping the transport safety system. A separate group of examinations consists of the investigations carried out to ascertain the accident circumstances that would enable the identification of the perpetrators and those to blame for the accident. In this case, the analyses are chiefly used by law-enforcement authorities (prosecutors, courts, etc.).
One of the elements of the analysis of an accident (collision) that has taken place is the "accident reconstruction", i.e. an attempt to reconstruct the course of what happened. The reconstruction results may be of crucial importance, especially for the participants in the incident. Such results provide grounds for the law-enforcement authorities to formulate procedural motions as regards accident perpetrators and for the court to decide about guilt and to pass a sentence. It should be stressed here that, intrinsically, the analysis is carried out after the incident has taken place. The forensic expert who prepares the opinion carries out a series of calculations and inferences in order to determine the quantities that are important for identifying the accident causes, using his/her knowledge, the trace evidence collected at the incident site (including the results of post-incident measurements), definite assumptions regarding the values of the parameters that describe the incident, and the methods available to him/her. Such quantities may describe the pre-incident behaviour of the participants, the motion of the vehicle or vehicles involved, or other important circumstances.
Given this purpose, the reliability of the expert's opinion is essential. Of great importance are the competence of the investigators, the adequacy of the tools used for the accident reconstruction, and the appropriate selection of the parameter values assumed. The uncertainty of the opinion is a somewhat different issue. Intrinsically, only approximate values of most parameters can be assumed. Therefore, a question arises about the accuracy of the parameter values determined in the accident reconstruction process or, in other words, about the uncertainty of determining the values of the quantities that are important in terms of the reconstruction purposes. This is the basic thread of this work.

The Notion of Uncertainty
The term "uncertainty" is used in many fields of science, where its meaning may differ. It is used in decision theory, which is one of the branches of mathematics and finds application in very different areas, such as statistics, information science, engineering problems (optimization), psychology, sociology, economics (management), or medicine. In general, uncertainty is defined as a state (situation) where the decisions made may produce various effects, with the probabilities of such effects being unknown [15]. The term "uncertainty" is firmly established in the fields of metrology and measurement techniques. Here, this term may be considered in its broader sense, as a set of general doubts about measurement results. However, it is more often understood in the narrower meaning, i.e. as a parameter describing the limits of variation in measurement results. In the document Guide to the Expression of Uncertainty in Measurement [16], uncertainty is defined as a "parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand".
In the formulation of expert's opinions about road accidents, the uncertainty of calculation results will always be involved. This will be related to both the uncertainty of the data assumed and the uncertainty of the computing tools used. With some simplification, the uncertainty of results of accident reconstruction calculations may be considered as corresponding to the notion of uncertainty of an indirect measurement in measurement technology. It may be assumed that the uncertainty of calculation results obtained during an analysis (reconstruction) of a road accident (or, in more general terms, accident situation) will be a parameter (or a set of parameters) describing the possible dispersion of values of the quantity (or quantities) determined by the calculations.
In practical terms, the accident reconstruction uncertainty is often associated with the reconstruction reliability. These two notions are not identical. The uncertainty should be understood as defined above, while the reliability is related to the confidence that the reconstruction result (whether or not the uncertainty has been determined) is correct. A formal description of determining the reliability has been proposed in [24], where the reliability has been defined, in the most simplified terms, as the probability that the reconstruction is true, using the probabilistic structure of a Bayes network.

Objective and Scope of the Study
Limiting ourselves to the part related to purely computational problems, we may present this in the form of a simple diagram (Fig. 1): the expert has a set of data describing the accident under analysis (Data), runs calculations using a method that is available or chosen in consideration of the nature of the incident and the actual purpose of the analysis (Tools), and obtains a specific result (Results). As an example: if the problem under analysis is the vehicle braking process and the quantity to be found is the vehicle stopping distance S_z, the set of input data may consist of the initial velocity of vehicle motion (V_0), the braking deceleration (a_h), the driver reaction and braking system response time (t_r), and the deceleration rise time (t_n). As the computing method, any method may be used that would be suitable for transforming the data set into the vehicle stopping distance S_z to be determined, e.g. the analytical formulas known from the fundamentals of the mechanics of vehicle motion (such as those given by [20]).
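For illustration, the stopping-distance example can be written out directly. The simplified formula sketched below (reaction distance, plus roughly half of the deceleration rise time travelled at the initial speed, plus the full braking distance) and all numerical values are illustrative assumptions of this sketch, not quantities taken from [20]:

```python
def stopping_distance(v0, a_h, t_r, t_n):
    """Simplified stopping distance S_z [m]: distance covered during the
    reaction/response time t_r, plus approximately half of the rise time
    t_n at the initial speed, plus the braking distance v0^2 / (2 a_h)."""
    return v0 * (t_r + 0.5 * t_n) + v0 ** 2 / (2.0 * a_h)

# Illustrative values: V_0 = 50 km/h, a_h = 7 m/s^2, t_r = 1 s, t_n = 0.2 s
v0 = 50.0 / 3.6  # m/s
s_z = stopping_distance(v0, a_h=7.0, t_r=1.0, t_n=0.2)
print(f"S_z = {s_z:.2f} m")
```

For these values the formula gives a stopping distance of about 29 m; the same model will be reused below when the individual uncertainty methods are sketched.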
From the point of view of the analysis objective, the most important issue is the final analysis result. It is burdened with a definite uncertainty, stemming from the uncertainty of the input data and from the uncertainty generated by the computing method used. As regards the data, the uncertainty may come from different sources. Some of the data may be taken from measurements carried out at the accident site, and this is the case where classic measurement uncertainty, both of random and systematic type, is encountered. Some other data, however, are assumed by the expert who runs the calculations, because either appropriate measurements are impracticable for technical, organizational or economic reasons or such data cannot be directly applied. A good example here may be the driver reaction time, as its value may vary within very wide limits depending on many diverse factors, e.g. the complexity of the traffic situation, the momentary psychophysical condition of the driver, etc.
As regards the computing methods, the uncertainty arises from the models and other mathematical tools used, which represent the real phenomena only in a simplified way. Even if the true values of individual parameters of the computational model adopted are used, the result obtained is only an approximation of the true value to be found. Simultaneously, the uncertainty resulting from the use of a specific method is not necessarily correlated with the degree of complexity of the model employed. Here, expert's knowledge and skills are important for the appropriate selection and use of a model that would best suit the problem under analysis, in respect of the uncertainty as well.
In this study, attention will be focused on the first source of uncertainty and the considerations will be dedicated to the methods that would make it possible to take the uncertainty of the input data into account in the calculations.

Road Accident Reconstruction Methods
In most general terms, the said methods may be divided into two categories:
• those using mathematical models of the man-vehicle-environment system;
• those using data recorded by "black box" type devices, i.e. Event Data Recorders (EDR).
The former category is the basic one. The methods of the latter category were unavailable until quite recently: the first automotive EDRs appeared in the mid-1990s, and they still have not become standard equipment of motor vehicles.
The models met at present are characterized by very different degrees of complexity, varying from simple analytical models to sophisticated systems where more or less complicated simulation programs must be used. In the complex simulation programs, multiple partial models most frequently occur, which represent various subsystems or components of the man-vehicle-environment system, constituting together a test environment designed for specific purposes.

Sources of Uncertainty in the Reconstruction of Road Accidents
As shown in the schematic diagram in Fig. 1, the accident reconstruction calculations are carried out for a certain set of data. In the case of mathematical models being used, the input data are assumed by the expert; if EDR data are available, then the values recorded are often used as the input. Two basic sources of uncertainty of the input data may be distinguished:
• measurement uncertainty of the quantities measured;
• uncertainty of the parameter values assumed (referred to as statistical uncertainty).
Fig. 1 Schematic diagram illustrating the main issues of the process of accident reconstruction
For data measurement results, the uncertainty sources may be all the factors that are characteristic of the specific measurement techniques (see e.g. [16]), i.e.: incomplete definition of the measurand, uncertainty related to the carrying out of the measurement (including errors of the method and measuring system used, non-representativeness, errors caused by environmental impact, reading errors, approximation and simplification errors), or uncertainty of measuring instruments.
However, the carrying out of full-scope measurements on all the objects involved, whether at the incident site or anywhere else, is hardly possible or actually impracticable. This is due to the scale of such a task, because the number of the quantities to be measured (e.g. the number of data to be introduced to simulation programs) may be of the order of several hundred or even more. The measurement of some parameters may be infeasible (as an example, this applies to many parameters that describe the collision process, such as the vehicle body stiffness curve or characteristics of other parts or objects damaged during the collision). The measurement of many other parameters might be possible, but this would require a lot of complicated and costly work (an example might be such inertial parameters of a vehicle as the location of the centre of vehicle mass or the moments of inertia of vehicle body solid or road wheels). Therefore, a significant number of data are assumed by the expert, based on technical documentation of the vehicles involved, simplified models used to estimate the values of the quantities in question, expert's experience, or specialized literature.
There is also a specific category of parameters that are measurable in principle but in practice cannot be measured, or can be measured in exceptional situations only. At the same time, such parameters are often critical from the point of view of the course of the incident. As regards vehicle motion, two parameter groups should be pointed out here (this has already been mentioned in Sect. 1.3): one of them is related to the characteristics of the tangential tyre-road interaction (in simplified terms, the tyre-road adhesion characteristics) and the other is related to the description of human (vehicle driver's or pedestrian's) behaviour.
Another source of uncertainty is the tool used to transform the set of input data into the set of analysis results sought. In the case of classic calculations, such a tool is the computing method employed, i.e. the mathematical model of the phenomenon under analysis. This type of uncertainty is referred to as modelling uncertainty. Its estimation is based on data obtained from validation or experimental verification of the model.
To recapitulate: the uncertainty of the calculation results obtained in an analysis of accident situations is a function of the uncertainty of the input data taken for the calculations (burdened with measurement uncertainty or uncertainty stemming from specific attributes of the data) and the uncertainty of the computing tool. A separate problem is the method of transforming the uncertainty of the input data into the uncertainty of the calculation results to be found, i.e. the method of taking the data uncertainty into account. Depending on the selection of this method (including its applicability to the specific computing method used to analyse the situation), different uncertainty of the calculation results may be obtained at the same uncertainty of the input data.

Review of the Literature Dealing with Uncertainty in Road Accident Analysis and Objective of the Study
The problem of uncertainty in the analysis of road accidents, although encountered from the very outset of accident reconstruction attempts, has actually been addressed in the scientific literature for only quite a short time. The first publications that directly refer to the issues of uncertainty in the field of analyses of accident situations in road traffic date back to the first half of the 1990s. These were the American works Uncertainty in Accident Reconstruction Calculation [5] and The Technique of Uncertainty Analysis as Applied to the Momentum Equation for Accident Reconstruction [22]. Both of them presented analytical methods that made it possible to determine the uncertainty of the results obtained, together with the applications of such methods to simple calculations related to accident reconstruction (estimation of the stopping distance, estimation of the pre-impact velocities). An important item is the publication Uncertainty Analysis for Forensic Science [8], where the authors present the fundamentals of the uncertainty calculus (including the probability theory and sensitivity analysis) from the point of view of its applicability to the preparation of forensic experts' opinions, including those related to accident analysis. To date, many publications have come out that address these problems. Apart from the works mentioned above, various methods of taking the uncertainty of data into account have been considered. The use of the total differential method has been discussed e.g. in [25]. In [2], the finite-difference method has been used to estimate the uncertainty. Numerous publications have dealt with the use of the Monte-Carlo method [1, 9, 10, 12, 17, 26, 27]. A technique using elements of the DoE (Design of Experiments) theory is also employed [6]. A probabilistic approach to uncertainty may be found in [11], where the uncertainty is defined as a conditional probability.
The publications [3, 13] cover the issue of measurement uncertainty in the reconstruction of motor vehicle collisions. The estimation of uncertainty employing "interval arithmetic" and techniques using elements of the DoE theory is also considered in the literature [28]. In [18], the point estimation method has been presented as a probabilistic tool for determining the uncertain parameters of a vehicle collision. The issues concerning the uncertainty of accident reconstruction calculations have also been indirectly touched upon in [21], where the sensitivity of the calculated values of the vehicle velocity change (ΔV) to vehicle and impact parameters is discussed, and in [23], where the coherence of data recorded in an accident database is analysed. A reference to this problem has also been made in [4], where a method has been presented that makes it possible to reduce the uncertainty of the estimated velocity of a pedestrian crossing the road.
The above shows that there are many methods of determining the uncertainty of calculations. Hence, a question arises about the comparability of the results of such calculations. A discussion of this matter has already been presented in [19]. In this study, the authors return to this issue, increasing the number of the methods considered. With reference to the schematic diagram shown in Fig. 1 herein:
• seven useful methods of transforming the uncertainty of input data into the uncertainty of calculation results have been presented, together with their formal descriptions;
• an example of their use has been demonstrated, with the uncertainties obtained by the different methods being compared.

Theoretical Foundations of the Seven Methods
First, let us assume that an adequate data set and a tool (mathematical model) making it possible to calculate the quantities to be found are available. To generalize, let us adopt a matrix notation as a more convenient form, treating the set of input data as a data vector x = [x_1, x_2, …, x_m]^T and the set of calculation results as a result vector y = f(x). The following vectors are to be found: y = [y_1, y_2, …, y_n]^T and Δy = [Δy_1, Δy_2, …, Δy_n]^T; the latter is the vector of absolute uncertainties of the estimation of the vector y components.
When the absolute uncertainty is normalized in relation to the nominal value, the relative uncertainty is obtained:

Δy_rel = [Δy_1/y_1, Δy_2/y_2, …, Δy_n/y_n]^T

In the measurement uncertainty theory, two basic approaches are discerned, where the uncertainty is determined using:
• a deterministic model, also referred to as the "interval model", where the notion of probability is not involved and the uncertainty value (Δy_i, i = 1, …, n) determined is the uncertainty bound (maximum);
• a probabilistic (or statistical) model, where the result (y_i, i = 1, …, n) is intrinsically a random variable and its uncertainty is measured by the dispersion of its distribution; in most cases, the parameters used as measures are the standard deviation ("standard uncertainty") or its multiple ("expanded uncertainty").
In the four sub-items below, the deterministic methods will be presented, i.e. the upper and lower bounds method (or extreme values method, EVM), the first-order and second-order total differential methods (TDM and TDM2, respectively), and the finite-difference method (FDM); the three probabilistic methods, i.e. the Gauss method (PrM), the method based on the description of stochastic processes (PrStM), and the Monte-Carlo method (MCM), will be described in the subsequent sub-items.

Upper and Lower Bounds Method (EVM)
In the upper and lower bounds method (or extreme values method), an assumption is made that the value of the quantity to be found, i.e. the value of a component of vector y, lies between the minimum and maximum values obtained by substitution of the minimum and maximum values of vector x components.
A measure of the uncertainty of the quantity y_i to be found is the difference:

Δy_i = y_i max − y_i min

A graphic interpretation of the uncertainty determined by means of the extreme values method has been shown in Fig. 2, based on an example with a function of a single variable.
An important assumption made in this method is the requirement of monotonicity of the function y_i = f_i(x_j) on the interval of vector x component values under analysis (this is a prerequisite for the truth of the statement that the extreme values of the vector y components occur at the ends of the intervals defined by the x_j min/max values). Depending on the monotonicity type, y_i min/max will be obtained for x_j min or x_j max (e.g. for a function increasing in x_j, y_i max = f_i(…, x_j max, …)). If the function y_i = f_i(x_j) is not monotonic on the intervals defined by the x_j min/max values, the local extremums must be identified for the y_i min/y_i max extreme values to be determined.
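In a minimal sketch, the method amounts to evaluating the model at every corner of the input hyper-rectangle and taking the extremes, which is sufficient under the monotonicity assumption discussed above. The stopping-distance model and all numbers below are illustrative assumptions:

```python
from itertools import product

def evm_bounds(f, nominal, delta):
    """Extreme values method: evaluate f at all 2^m corners of the
    intervals [x_j - dx_j, x_j + dx_j] and return (y_min, y_max).
    Valid when f is monotonic in each argument on those intervals."""
    corners = product(*[(x - d, x + d) for x, d in zip(nominal, delta)])
    values = [f(*c) for c in corners]
    return min(values), max(values)

def stopping_distance(v0, a_h, t_r, t_n):  # illustrative model
    return v0 * (t_r + 0.5 * t_n) + v0 ** 2 / (2.0 * a_h)

y_min, y_max = evm_bounds(stopping_distance,
                          nominal=(13.89, 7.0, 1.0, 0.2),
                          delta=(0.5, 0.5, 0.2, 0.05))
print(f"S_z in [{y_min:.2f}, {y_max:.2f}] m, spread = {y_max - y_min:.2f} m")
```

Note that the corner evaluation automatically handles parameters in which the function decreases (here a_h): the minimum distance is obtained for the maximum deceleration.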

Total Differential Method (TDM)
Here, the nominal values of the vector x components (x_j(0), j = 1, …, m) and their absolute uncertainties Δx_j are assumed to be known. In the total differential method, the uncertainty of determining the vector y components can be found by using the notion of the first-order sensitivity coefficient and the total differential:

Δy_i = Σ_j |W_ij| · Δx_j,  where W_ij = ∂f_i/∂x_j, i = 1, …, n; j = 1, …, m  (7)

In the matrix notation, this may be written as follows:

Δy = |W| · Δx  (8)

A graphic interpretation of the uncertainty determined by means of the total differential method has been illustrated in Fig. 3. It should be noted that in this method, the uncertainty is determined by linearization of the function f_i(x_1, …, x_m) around the nominal point. The uncertainty vector Δy = [Δy_1, Δy_2, …, Δy_n]^T defines the maximum values of errors in estimating the vector y components, i.e. the uncertainty bound. For linear models y_i = f_i(x_j), this method becomes identical with the extreme values method.
This method is convenient, but it only produces good results when the relations f_i(x_j) are characterized by relatively small changes in the sensitivity coefficient W_ij in the interval x_j ± Δx_j of interest. Its basic strong point is the fact that it directly includes elements of sensitivity analysis, which makes it possible to identify the parameters whose impact on the calculation results is more or less considerable.
One of the weak points of determining the uncertainty with the use of formulas (7) or (8) may be an unreasonably "extended" uncertainty range, hindering its practical use in estimating the uncertainty (this will be demonstrated in a calculation example; the same may be said about the EVM). This applies in particular to situations where many data x_j are burdened with uncertainty and the "effects" of the individual uncertainties (formulas (7) or (8)) are summed up due to the nature of the method. As mentioned previously, this method determines the uncertainty bound under the assumption that the situation where all the data take the values at the ends of their intervals can occur with a probability identical to that of any other situation. In practice, such a case is hardly realistic. Therefore, to determine the uncertainty by this method, a procedure is sometimes run that is similar to that adopted for complex measurement uncertainties and a statistical model. In such a case, the uncertainty is assumed to be a vector sum of the uncertainty components, constituting a "combined standard uncertainty" determined in accordance with the "law of propagation of uncertainty" (also referred to as the "uncertainty propagation rule") [8, 16]:

Δy_i = sqrt( Σ_j (W_ij · Δx_j)² )  (9)

Sometimes, the uncertainty thus determined is called "mean square uncertainty", e.g. in [26]. To differentiate, the uncertainty defined by (7) or (8) will be denoted here by TDM_M ("maximum uncertainty" or "uncertainty bound"), while that defined by (9) will be denoted by TDM_S ("mean square uncertainty").
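For the illustrative stopping-distance model used earlier, the sensitivity coefficients can be written out analytically and both variants of the method applied side by side; the input values and uncertainties below are assumptions of this sketch:

```python
import math

# Analytic first-order sensitivity coefficients W_j = dS_z/dx_j for the
# illustrative model S_z = v0*(t_r + t_n/2) + v0^2/(2*a_h).
v0, a_h, t_r, t_n = 13.89, 7.0, 1.0, 0.2
dx = (0.5, 0.5, 0.2, 0.05)           # assumed absolute uncertainties
W = (t_r + 0.5 * t_n + v0 / a_h,     # dS/dv0
     -v0 ** 2 / (2.0 * a_h ** 2),    # dS/da_h
     v0,                             # dS/dt_r
     0.5 * v0)                       # dS/dt_n

tdm_max = sum(abs(w) * d for w, d in zip(W, dx))              # formula (7), TDM_M
tdm_sq = math.sqrt(sum((w * d) ** 2 for w, d in zip(W, dx)))  # formula (9), TDM_S
print(f"TDM_M = {tdm_max:.2f} m, TDM_S = {tdm_sq:.2f} m")
```

For these data the mean-square uncertainty TDM_S comes out markedly smaller than the maximum uncertainty TDM_M, which illustrates the "extended range" effect discussed above.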

Higher-Order Total Differential Method (TDM2)
In the classic total differential method described above, the function y = f(x) is linearized. In the case of non-linear relations, when considerable changes in the sensitivity coefficient W_ij occur in the interval x_j ± Δx_j of interest (i.e. at a significant non-linearity), the uncertainty determined will be burdened with an error (cf. Figs. 2 and 3).
Formulas (7) and (8) may be derived by expanding the function y = f(x) into a Taylor series:

f_i(x_0 + Δx) = f_i(x_0) + Σ_j (∂f_i/∂x_j) · Δx_j + (1/2) · Σ_j Σ_k (∂²f_i/(∂x_j ∂x_k)) · Δx_j · Δx_k + …  (10)

Hence, the following will be obtained:

Δy_i = f_i(x_0 + Δx) − f_i(x_0) = Σ_j (∂f_i/∂x_j) · Δx_j + (1/2) · Σ_j Σ_k (∂²f_i/(∂x_j ∂x_k)) · Δx_j · Δx_k + …

If only the term with the first-order derivative is taken into account then, after absolute values are introduced to make the individual equation terms independent of the sign of the derivative values, the relation described by formula (7) will be obtained. If the terms with the second-order derivatives are also taken into account, an equation defining the uncertainty by the second-order total differential method TDM2 will be formulated:

Δy_i = Σ_j |W_ij| · Δx_j + (1/2) · Σ_j Σ_k |W(2)_ijk| · Δx_j · Δx_k  (11)

where W(2)_ijk = ∂²f_i/(∂x_j ∂x_k), i = 1, …, n and j, k = 1, …, m.

Coefficients W(2)_ijk are the coefficients of the second-order sensitivity of the ith quantity to the jth and kth parameters. In qualitative terms, the difference between the TDM and TDM2 methods has been illustrated in Fig. 4. For linear models y_i = f_i(x_j), this method becomes identical with the extreme values method and the first-order total differential method.
Equation (10) may also be used to derive formulas for determining the uncertainty with the higher-order terms taken into account. However, this is of limited practical importance in real applications. For functions of multiple variables, the number of partial derivatives (sensitivity coefficients) becomes very large. As an example: two first-order and three second-order sensitivity coefficients have to be determined for a function of two variables, while for a function of six variables, the numbers of such coefficients rise to 6 and 21, respectively (the number of second-order coefficients is equal to the number of 2-combinations with repetition from an m-element set). It should also be noted that if the uncertainty is determined by such a method using total differentials of an order higher than one, the uncertainty value obtained will always be raised, which considerably reduces the usefulness of the method.
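A sketch of the TDM2 computation according to formula (11); as an implementation choice of this sketch, the first- and second-order sensitivity coefficients are approximated numerically by central differences rather than derived analytically, and the model and numbers are illustrative:

```python
def tdm2_uncertainty(f, x0, dx, h=1e-4):
    """TDM2 bound: sum_j |W_j|*dx_j + 0.5*sum_jk |W2_jk|*dx_j*dx_k,
    with the sensitivity coefficients estimated by central differences."""
    m = len(x0)

    def fs(shifts):  # f with coordinate j shifted by shifts[j]*h
        return f([xj + sj * h for xj, sj in zip(x0, shifts)])

    def e(j, s):  # elementary shift vector
        v = [0.0] * m
        v[j] = s
        return v

    total = 0.0
    for j in range(m):  # first-order terms |W_j| * dx_j
        w1 = (fs(e(j, 1)) - fs(e(j, -1))) / (2.0 * h)
        total += abs(w1) * dx[j]
    for j in range(m):  # second-order terms 0.5 * |W2_jk| * dx_j * dx_k
        for k in range(m):
            spp = e(j, 1); spp[k] += 1
            spm = e(j, 1); spm[k] -= 1
            smp = e(j, -1); smp[k] += 1
            smm = e(j, -1); smm[k] -= 1
            w2 = (fs(spp) - fs(spm) - fs(smp) + fs(smm)) / (4.0 * h * h)
            total += 0.5 * abs(w2) * dx[j] * dx[k]
    return total

def stopping_distance(x):  # illustrative model, x = [v0, a_h, t_r, t_n]
    v0, a_h, t_r, t_n = x
    return v0 * (t_r + 0.5 * t_n) + v0 ** 2 / (2.0 * a_h)

dy2 = tdm2_uncertainty(stopping_distance, [13.89, 7.0, 1.0, 0.2],
                       [0.5, 0.5, 0.2, 0.05])
print(f"TDM2 uncertainty = {dy2:.2f} m")
```

As expected from the discussion above, the second-order terms raise the uncertainty bound somewhat relative to the first-order TDM result for the same data.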

Finite-Difference Method (FDM)
The finite-difference method of uncertainty calculation is in practice a simplified version of the total differential method. Here, the partial derivatives do not have to be determined in analytical form. As in the TDM case, the uncertainty formula is derived by expanding the function into a Taylor series (see Eq. 10), with the series being confined to first-order terms only. The partial derivative (sensitivity coefficient) values are estimated using a difference quotient, i.e. replacing the derivative with a ratio of increments:

W_ij ≈ δy_j/δx_j

where δx_j is a sufficiently small increment of the x_j value and δy_j is the increment of the function value f_i caused by δx_j.
The uncertainty formula has a form similar to that of (7):

Δy_i = Σ_j |δy_j/δx_j| · Δx_j

For linear models y_i = f_i(x_j), this method intrinsically becomes identical with the methods presented previously.
Here, the option of determining the uncertainty as a vector sum of the uncertainty components is also used, as in the TDM case:

Δy_i = sqrt( Σ_j (δy_j/δx_j · Δx_j)² )

The δx_j value is selected arbitrarily (therefore, adequate experience of the person who runs the calculations would be welcome). It should be such that the partial derivative value can be satisfactorily approximated. According to [8], the δx_j value should be initially assumed as about 0.01·x_j(0) and then gradually reduced, if necessary, until it no longer affects the uncertainty level Δy_i obtained.
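A sketch of the FDM with the step-refinement procedure recommended in [8]; the stopping-distance model, the stopping tolerance, and the numerical values are illustrative assumptions:

```python
def fdm_uncertainty(f, x0, dx, tol=1e-6):
    """Finite-difference method (FDM): each sensitivity coefficient is
    replaced by a difference quotient dy_j/dx_j. Following [8], the step
    starts at about 0.01*x_j(0) and is halved until the contribution of
    the given parameter to the uncertainty stops changing."""
    y0 = f(x0)
    total = 0.0
    for j, (xj, dxj) in enumerate(zip(x0, dx)):
        step = 0.01 * abs(xj) if xj != 0.0 else 1e-6
        prev = None
        for _ in range(40):  # refine the step until the term stabilises
            x = list(x0)
            x[j] = xj + step
            term = abs((f(x) - y0) / step) * dxj  # |W_j| * dx_j
            if prev is not None and abs(term - prev) <= tol * max(term, 1e-12):
                break
            prev = term
            step *= 0.5
        total += term
    return total

def stopping_distance(x):  # illustrative model, x = [v0, a_h, t_r, t_n]
    v0, a_h, t_r, t_n = x
    return v0 * (t_r + 0.5 * t_n) + v0 ** 2 / (2.0 * a_h)

dy = fdm_uncertainty(stopping_distance, [13.89, 7.0, 1.0, 0.2],
                     [0.5, 0.5, 0.2, 0.05])
print(f"FDM uncertainty = {dy:.2f} m")
```

For a model this smooth, the refined difference quotients converge to the analytical sensitivity coefficients, so the result practically coincides with the TDM_M value for the same data.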

Gauss Probabilistic Method (PrM)
The uncertainty determination methods described above are categorized as deterministic. In such an approach, any combination of values x_j falling into the intervals x_j(0) ± Δx_j, j = 1, …, m is considered equally probable. In consequence, the uncertainty of calculations may be overestimated. To take into account the fact that some variants of such combinations (e.g. a situation where all the x_j values would be at the ends of the intervals x_j(0) ± Δx_j) may occur with a low probability, the probabilistic nature of the quantities under analysis should be considered.
In the probabilistic methods, an assumption is made that the components of vector x: x_j, j = 1, …, m are random variables with known probability distributions. In consequence, the components of vector y: y_i, i = 1, …, n defined by the functional relation y = f(x) are also random variables, and the probability distribution of vector x determines the distribution of vector y. However, the analytical determination of the latter when the numbers of components of vectors x and y exceed 2 and the functional vector f is non-linear is a complicated problem, solvable in some specific cases only. In the applications under consideration, therefore, it is justified to use a simplified method, which may be found in the literature dealing with measurement uncertainty, including [16], or with analyses of accident situations, such as [7] or [8]. In the calculus of errors, such a method is referred to as the "Gauss method" or simply the "statistical method".
The said method is based on the following assumption: if the quantity to be found is a function of vector x, y = f(x), and the components of vector x: x_j, j = 1, …, m are described as independent random variables with normal probability distributions N(x̄_j, σ_xj), where x̄_j is the mean value and σ_xj is the standard deviation, then y_i, i = 1, …, n is a random variable with a normal probability distribution N(ȳ_i, σ_yi) and the mean value ȳ_i is a function of the mean values of the vector x components:

ȳ_i = f_i(x̄_1, x̄_2, …, x̄_m)

The standard deviation σ_yi may be expressed by the following formula (identical with the formula of the combined standard uncertainty [16]):

σ_yi = sqrt( Σ_j (∂f_i/∂x_j)² · σ_xj² )

The uncertainty of the quantity to be found may then be determined for any confidence level.
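A minimal sketch of the Gauss method for the illustrative stopping-distance model; the standard deviations assumed for the inputs are arbitrary illustration values:

```python
import math

def gauss_standard_uncertainty(partials, sigmas):
    """Gauss method: sigma_y = sqrt(sum_j (df/dx_j)^2 * sigma_xj^2),
    assuming independent, normally distributed inputs."""
    return math.sqrt(sum((w * s) ** 2 for w, s in zip(partials, sigmas)))

# Illustrative inputs for the model S_z = v0*(t_r + t_n/2) + v0^2/(2*a_h)
v0, a_h, t_r, t_n = 13.89, 7.0, 1.0, 0.2
sigmas = (0.5, 0.3, 0.2, 0.02)       # assumed standard deviations
partials = (t_r + 0.5 * t_n + v0 / a_h,   # dS/dv0
            -v0 ** 2 / (2.0 * a_h ** 2),  # dS/da_h
            v0,                           # dS/dt_r
            0.5 * v0)                     # dS/dt_n

sigma_y = gauss_standard_uncertainty(partials, sigmas)
print(f"standard uncertainty = {sigma_y:.2f} m, "
      f"expanded (k = 2, ~95%) = {2 * sigma_y:.2f} m")
```

The expanded uncertainty shown in the last line corresponds to the common choice of a coverage factor k = 2 for an approximately 95% confidence level.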

Method Based on the Description of Stochastic Processes (PrStM)
This method is a generalization of the PrM method. It may be employed when the mathematical model is explicitly dependent on time. In general terms, such a model is a system of differential equations having the following general form:

dy/dt = f(y, t)  (18)

where y = [y_1, y_2, y_3, …, y_n]^T is the vector of state coordinates. When stochastic processes are introduced to the model, Eq. (18) may take the general form:

dy_i/dt = f_i(y, t) + Σ_j g_ij · X_tj,  g_ij = g_i(y_j, t)

where X_t = [X_t1, X_t2, X_t3, …, X_tm]^T is the vector of an m-dimensional stochastic process. The equation solving methods depend on the equation form and the nature of the stochastic processes. A strong point of the approach presented is the fact that the results are obtained in the form of complete probabilistic characteristics of the parameters sought, determined for any freely chosen instant. On the other hand, the difficulty of obtaining an analytical solution is a serious limitation; significant simplifications (linearization methods, simplifications of the nature of the stochastic processes) are often indispensable even for models that are not very complicated. A necessity also arises to determine the characteristics of the stochastic process. In the case of processes compatible with the correlation theory of stochastic processes, the function describing the expected value and the correlation function should be known, while the latter is generally very difficult to determine. Therefore, the applications of this method to the problems under consideration are very restricted (nevertheless, an example application will be presented in Sect. 4).
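Although the strength of the approach lies in analytical probabilistic characteristics, the idea can be illustrated numerically: below, a braking process with a white-noise disturbance of the deceleration is integrated by the Euler-Maruyama scheme. The model, its parameters and the noise intensity are illustrative assumptions of this sketch, not the formulation used in Sect. 4:

```python
import math
import random
import statistics

def braking_distance_sde(v0, a_mean, sigma, dt=1e-3, rng=random):
    """One realisation of dv = -a_mean*dt + sigma*dW (Euler-Maruyama);
    returns the distance travelled until the velocity reaches zero."""
    v, s = v0, 0.0
    while v > 0.0:
        s += v * dt
        v += -a_mean * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
    return s

rng = random.Random(1)
runs = [braking_distance_sde(13.89, 7.0, 0.5, rng=rng) for _ in range(200)]
print(f"mean = {statistics.mean(runs):.2f} m, "
      f"std = {statistics.stdev(runs):.2f} m")
```

The sample mean stays close to the deterministic braking distance v0²/(2·a_mean) ≈ 13.8 m, while the dispersion of the realisations reflects the assumed noise intensity.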

Monte-Carlo Method (MCM)
The Monte-Carlo technique is now one of the most powerful computing tools used in analyses of the phenomena and processes that cannot be described by analytical models due to their complexity. It works very well especially in the computational problems where random phenomena should be taken into account. In general terms, its essence lies in repeating an experiment many times with test parameter values being changed at random within a range defined by the specific type of the experiment and the phenomenon examined. Due to the iterative nature of this technique, it is counted among simulation methods. For this reason, the term "Monte-Carlo simulation" can often be found in the literature (see e.g. [8,9,26]).
For the issues in question, this method makes it possible to find the probability distributions sought, using a predetermined model in the form of a function y = f(x) that represents the phenomenon under analysis. The components of vector x, x_j, j = 1, …, m, are assumed to be random variables with known characteristics (determined theoretically or empirically).
The random variables y_i, i = 1, …, n are determined by multiple numerical calculations made according to the predetermined relation y = f(x) for computer-generated pseudorandom numbers x_j, drawn in accordance with the appropriate distributions of the specific quantities. This method may also be employed when simulation models are used; with this objective in view, multiple simulations are carried out for randomly generated values of the individual model parameters. The possible range of solutions y_i is obtained from the statistical pseudo-distributions of the variables y_i generated as described above. The uncertainty measures are the measures of dispersion of the statistical pseudo-distributions of y_i thus obtained.
This method makes it possible to avoid the difficulties mentioned in sub-items 3.2.5 and 3.2.6. A considerable impact on the correctness of the results obtained is exerted by the quality of the pseudorandom-number generators (measured by the finite quantity of numbers in the generator cycle). Noteworthy is also the fact that, in a degenerate form, i.e. in calculations carried out only for the extreme values of the x_j distributions and under an assumption of monotonicity of y_i = f(x_j), this method is equivalent to the extreme values method (EVM).

Calculation of the Uncertainty of Estimation of the Vehicle Stopping Distance
The calculations will be made for one of the standard problems in accident situation analyses, i.e. the vehicle braking process. This example has other good points as well: it may be described by a mathematical model that is simple and analytical and, at the same time, accurate. On the other hand, the parameters of this model describe all the components of the man-vehicle-road system, and their values are taken, in a significant part, from the literature (so they are burdened with statistical uncertainty).
The work with the mathematical model starts from a simplified time history of the process of vehicle braking on an even horizontal road, as shown in Fig. 5. Assuming additionally that the vehicle is braked with the tyre-road adhesion forces being fully utilized, we may state that the maximum braking deceleration value a_hm is:

a_hm = μ · g    (20)

where μ [-] is the tyre-road adhesion coefficient (peak or sliding) and g ≅ 9.81 m/s² is the acceleration of gravity.
If the initial braking speed V_0 (m/s) (the vehicle speed at the instant t_0 = 0) and the values of t_r, t_n, and a_hm (see Fig. 5) are known, then the stopping distance may be expressed by the simplified formula:

S_z = V_0 · (t_r + t_n/2) + V_0²/(2 · a_hm)    (21)

Thus, a functional relation y = f(x) has been obtained, with x = [x_1, x_2, x_3, x_4]^T ≡ [V_0, μ, t_r, t_n]^T and f(x) = x_1 · (x_3 + x_4/2) + x_1²/(2 · g · x_2) (with the assumption adopted that the g value is certain). The calculations are made to determine the stopping distance y = [y_1] ≡ [S_z] and the uncertainty of determining its value Δy = [Δy_1] ≡ [ΔS_z], with the assumption adopted that the uncertainty values Δx = [Δx_1, Δx_2, Δx_3, Δx_4]^T ≡ [ΔV_0, Δμ, Δt_r, Δt_n]^T are known.
The uncertainty will be calculated using the seven methods described previously, i.e. EVM, TDM, TDM2, FDM, PrM, PrStM, and MCM, for a common data set. The data set adopted has been given in Table 1. It represents typical road conditions, described below. The initial braking speed has been assumed as equal to the speed limit applicable to built-up areas, with a ±10% tolerance (as an allowance for e.g. the accuracy of speedometer readings and the driver's errors in taking the readings). The tyre-road adhesion coefficient value μ assumed corresponds to a dry asphalt road surface; in this case, the uncertainty has been assumed as being quite low (see the data given in the literature dealing with the mechanics of motor vehicle motion and accident reconstruction, e.g. [7,20]). As regards the total system response time and the braking deceleration rise time, the data have been adopted in a similar way, and the parameter values and their uncertainties are at a realistic level.
In three methods (TDM, TDM2, PrM), appropriate partial derivatives (sensitivity coefficients) must be determined. For the mathematical model described by Eq. (21), they have the form given in Table 2.
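As a cross-check of the first-order coefficients in Table 2, the analytic sensitivities of Eq. (21) can be written out and compared with a central finite difference (the FDM idea). A minimal sketch; since Table 1 is not reproduced in this extract, the nominal parameter values are illustrative assumptions:

```python
# First-order sensitivity coefficients of the model
# S_z = V0*(t_r + t_n/2) + V0**2/(2*mu*g), cross-checked against a central
# finite difference. Parameter values are illustrative assumptions.

G = 9.81
v0, mu, t_r, t_n = 13.89, 0.8, 1.0, 0.2   # assumed nominal values

def s_z(v0, mu, t_r, t_n):
    return v0 * (t_r + t_n / 2) + v0**2 / (2 * mu * G)

# Analytic sensitivity coefficients (analogues of the Table 2 entries):
w_v0 = (t_r + t_n / 2) + v0 / (mu * G)    # dS_z/dV0
w_mu = -v0**2 / (2 * mu**2 * G)           # dS_z/dmu
w_tr = v0                                  # dS_z/dt_r
w_tn = v0 / 2                              # dS_z/dt_n

# Central finite difference as a numerical cross-check of dS_z/dV0:
h = 1e-6
fd_v0 = (s_z(v0 + h, mu, t_r, t_n) - s_z(v0 - h, mu, t_r, t_n)) / (2 * h)
assert abs(w_v0 - fd_v0) < 1e-5
print(w_v0, w_mu, w_tr, w_tn)
```

The negative sign of dS_z/dμ reflects the fact that a higher adhesion coefficient shortens the stopping distance; all the other sensitivities are positive.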

Extreme Values Method (EVM)
According to Eq. (3), the extreme values may be determined from the following formulas (thanks to the simple form of the function S_z = f(V_0, μ, t_r, t_n), its monotonicity is known):

S_zmax = f(V_0 + ΔV_0, μ − Δμ, t_r + Δt_r, t_n + Δt_n)
S_zmin = f(V_0 − ΔV_0, μ + Δμ, t_r − Δt_r, t_n − Δt_n)

For the comparability with the other methods to be maintained, the uncertainty has been assumed as half of the difference between S_zmax and S_zmin:

ΔS_z = (S_zmax − S_zmin)/2    (23)

The relative uncertainty is the ratio of (23) to the arithmetic average of S_zmax and S_zmin:

δS_z = (S_zmax − S_zmin)/(S_zmax + S_zmin)

Table 2 Sensitivity coefficients of the 1st order (W_Sz,j) and 2nd order (W(2)_Sz,jk) and their values for the nominal set of parameters. a According to Schwarz's theorem, mixed partial derivatives do not depend on the differentiation order (they have an identical form): ∂²f/∂x_j∂x_k = ∂²f/∂x_k∂x_j. Therefore, they are determined only for the pairs j, k such that k > j. The occurrence of such a pair of derivatives is reflected in Eq. (10) by the multiplier "2" in the terms with mixed derivatives.
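The EVM computation above can be sketched as follows; the nominal values and maximum uncertainties are illustrative assumptions, since Table 1 is not reproduced in this extract:

```python
G = 9.81

def s_z(v0, mu, t_r, t_n):
    return v0 * (t_r + t_n / 2) + v0**2 / (2 * mu * G)

# Nominal values and maximum uncertainties (illustrative assumptions):
v0, dv0   = 13.89, 1.39    # initial speed +/-10% [m/s]
mu, dmu   = 0.80, 0.05     # adhesion coefficient [-]
t_r, dt_r = 1.00, 0.20     # system response time [s]
t_n, dt_n = 0.20, 0.05     # deceleration rise time [s]

# S_z grows with V0, t_r, t_n and falls with mu, so the extremes are:
s_max = s_z(v0 + dv0, mu - dmu, t_r + dt_r, t_n + dt_n)
s_min = s_z(v0 - dv0, mu + dmu, t_r - dt_r, t_n - dt_n)

delta_s = (s_max - s_min) / 2                  # half-width, Eq. (23) analogue
rel = (s_max - s_min) / (s_max + s_min)        # relative uncertainty
print(s_min, s_max, delta_s, rel)
```

For these assumed data the stopping distance ranges from roughly 20 m to roughly 36 m, i.e. a relative uncertainty of almost 30% even though the individual input tolerances are modest; this illustrates why EVM is the most pessimistic of the methods compared.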

Method Based on the Description of Stochastic Processes (PrStM)
The time history of the vehicle braking deceleration has been assumed as having a form similar to that adopted previously (see Fig. 6). Three characteristic phases have been discerned, taking place in the time intervals denoted by t_r, t_n, and t_a, where t_a represents the time of braking with the braking force fully developed. It has been assumed that in the third phase, the braking deceleration is a sum of a defined function of time f(t) (a "trend") and a stochastic process X_a(t):

a(t) = f(t) + X_a(t)

Moreover, it has been assumed that:
• X_a(t) is a stationary (in the broad sense) normal stochastic process with mean value m_Xa, variance v_Xa = σ²_Xa, and known correlation function K_Xa(τ);
• the trend is a defined function of time of the form given by Eq. (35).
If the system response phase (t_r) and the deceleration rise phase (t_n) are taken into account and the inequality 0 ≤ t′ < t_r + t_n holds, then V_p and S_p become dependent random variables.
A complete description of the solution presented above may be found in [14]. Without going into detail, the solutions obtained in this case may be proven to be normal random processes. Figure 7 shows the time histories of the solutions in the form of mean values of the distance travelled (S), vehicle speed (V), and longitudinal vehicle acceleration (a), together with the corresponding time histories of the standard deviations σ_S, σ_V, and σ_a. These curves have been obtained for the parameter values corresponding to the data given in Table 1.
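The PrStM setup can also be checked by direct simulation. The sketch below is not the analytical solution from [14]: it approximates the stationary normal process X_a(t) by an exponentially correlated (Ornstein-Uhlenbeck) discrete update and estimates the mean and standard deviation of the stopping distance over repeated runs. The noise parameters and the Table 1 analogues are illustrative assumptions:

```python
import math
import random

G = 9.81
V0, MU, T_R, T_N = 13.89, 0.8, 1.0, 0.2   # assumed Table 1 analogues
A_HM = MU * G                              # fully developed deceleration
SIGMA_XA, TAU = 0.5, 0.1   # assumed std dev [m/s^2] and correlation time [s]
DT = 0.001

def braking_run(rng):
    """One realization: trapezoidal deceleration trend plus OU noise X_a(t)
    in the fully developed phase; returns the stopping distance."""
    rho = math.exp(-DT / TAU)              # one-step OU correlation factor
    t, v, s, x_a = 0.0, V0, 0.0, 0.0
    while v > 0:
        if t < T_R:                        # system response phase
            a = 0.0
        elif t < T_R + T_N:                # deceleration rise phase
            a = A_HM * (t - T_R) / T_N
        else:                              # a(t) = trend + X_a(t)
            x_a = rho * x_a + SIGMA_XA * math.sqrt(1 - rho**2) * rng.gauss(0, 1)
            a = A_HM + x_a
        v_new = max(v - a * DT, 0.0)
        s += (v + v_new) / 2 * DT          # trapezoidal distance update
        v = v_new
        t += DT
    return s

rng = random.Random(1)
runs = [braking_run(rng) for _ in range(200)]
mean_s = sum(runs) / len(runs)
std_s = (sum((r - mean_s)**2 for r in runs) / (len(runs) - 1)) ** 0.5
print(mean_s, std_s)   # mean stopping distance and its dispersion
```

Because the noise has zero mean and a short correlation time, the mean stopping distance stays close to the deterministic value while the dispersion it induces is small; this is consistent with the narrow PrStM ranges reported in Table 3.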

Monte-Carlo Method (MCM)
For these calculations, a special computer program MCM has been written, in which the function described by formula (21) has also been implemented. The model parameter values V_0, μ (or a_hm), t_r, and t_n may be generated as random (pseudorandom) numbers whose distributions are programmed as functions (e.g. normal, exponential, or uniform) or are empirically based. The data taken for the calculations corresponded to those specified in Table 1. The histograms representing the distribution of the stopping distance S_z have been shown in Figs. 8 and 9. Figure 8 describes the situation where all the data (V_0, t_r, t_n, μ) were treated as random variables with truncated normal distribution (the numbers generated could not differ from the mean by more than three times the standard deviation). The situation with these data treated as random variables with uniform (rectangular) distribution has been illustrated in Fig. 9. As can be seen, the results in both cases resemble in their shape the curve representing a truncated normal distribution, but with different standard deviation values (higher in the latter case). Actually, however, neither of the distributions can be considered a truncated normal one: in both cases, the distribution curve is slightly asymmetric, with the mode shifted towards the lower S_z values. This is due to the nonlinearity of the relation represented by Eq. (21).
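The MCM program itself is not reproduced here, but the two situations of Figs. 8 and 9 can be sketched as follows. Both input distributions span the same ±3σ interval: a normal distribution truncated at three standard deviations, and a uniform distribution over the same support (which therefore has a larger standard deviation, σ√3). The parameter values are illustrative assumptions, not the paper's Table 1:

```python
import random
import statistics

G = 9.81
PARAMS = [  # (mean, standard deviation): V0 [m/s], mu [-], t_r [s], t_n [s]
    (13.89, 0.46), (0.8, 0.03), (1.0, 0.10), (0.2, 0.02),
]

def s_z(v0, mu, t_r, t_n):
    return v0 * (t_r + t_n / 2) + v0**2 / (2 * mu * G)

def draw_trunc_normal(rng, m, s):
    while True:                      # reject draws beyond +/-3 sigma
        x = rng.gauss(m, s)
        if abs(x - m) <= 3 * s:
            return x

def draw_uniform(rng, m, s):
    return rng.uniform(m - 3 * s, m + 3 * s)   # same +/-3 sigma support

def mcm(draw, n=10000, seed=1):
    rng = random.Random(seed)
    return [s_z(*(draw(rng, m, s) for m, s in PARAMS)) for _ in range(n)]

normal_runs, uniform_runs = mcm(draw_trunc_normal), mcm(draw_uniform)
print(statistics.mean(normal_runs), statistics.stdev(normal_runs))
print(statistics.mean(uniform_runs), statistics.stdev(uniform_runs))
```

As in the text, the uniform-input case yields a clearly larger dispersion of S_z than the truncated-normal case, while both sample distributions remain slightly asymmetric because of the 1/μ nonlinearity of the model.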

Summary of the Results
The calculation results have been summarized in Table 3. They will be discussed in Sect. 4.2.

Comparison Between the Seven Methods Used
To facilitate the comparison between the uncertainties estimated with the different methods, the calculation results specified in Table 3 have been presented graphically in Fig. 10 in the form of stopping distance ranges. The following conclusions may be drawn from the results presented:
• The ranges of the solutions obtained with the deterministic methods where the maximum uncertainty is estimated (EVM, TDM_M, TDM2, FDM_M) do not differ considerably from each other. The highest value has been obtained for the TDM2 method, where the uncertainty is bigger by about 6% than the uncertainty calculated with the TDM method.
• The results calculated with the probabilistic methods PrM and PrStM significantly differ from those obtained from the deterministic methods. The stopping distance ranges are much narrower, which is advantageous from the point of view of usefulness in accident reconstruction.
• A similar effect may be obtained by using deterministic methods and calculating the mean square uncertainty (TDM_S, FDM_S).
• The ranges determined by the Gauss probabilistic method (PrM) and the probabilistic method based on the description of stochastic processes (PrStM) are close to each other. This means that in the case under consideration and in similar problems, the PrM method, being relatively simple in comparison with the PrStM, will be sufficient for determining the probability distribution of the quantity sought.
• The ranges determined by the Monte-Carlo probabilistic method (MCM) depend on the type of the probability distribution of the input data. In general, they are wider than those obtained from the other probabilistic methods (PrM, PrStM). When the input data are treated as random variables with uniform distribution (MCM-u), the range calculated is close to that determined with the deterministic methods EVM, TDM_M, and FDM_M.
For the data treated as having normal distribution (MCM-n), a narrower range has been obtained, which may be interpreted as an effect of coming closer to the PrM and PrStM methods. Hence, a hypothesis may be formulated that the MCM method is a compromise between the deterministic methods TDM and EVM and the probabilistic methods PrM and PrStM, in terms of both their applicability and the reliability of the results obtained. The MCM method may also be considered a good reference for verifying the results obtained with other methods.
Based on the results obtained in the calculation example, a statement may be made that the introduction of data uncertainty to the calculations causes big differences between the results of such calculations and the "nominal" results (i.e. the results obtained without taking the data uncertainty into account). Such an effect can be observed for each of the methods used to estimate the impact of the said inaccuracies.

Fig. 10 Comparison of the possible solution ranges when determining the stopping distance S_z for the adopted set of input data (Table 1) and the 7 uncertainty estimation methods (EVM - extreme values method, TDM - total differential method, TDM2 - second-order total differential method, FDM - finite-difference method, PrM - Gauss probabilistic method, PrStM - probabilistic method based on the description of stochastic processes, MCM - Monte-Carlo probabilistic method; defining symbols added to "TDM" and "FDM": M - maximum uncertainty, S - mean square uncertainty; defining symbols added to "MCM": n - data with normal distribution, u - data with uniform distribution)
In consideration of the above and the fact that the data uncertainty values taken for the calculations were not too high, the following general conclusion may be drawn: a failure to take the data uncertainty into account may result in the construction of an untrue hypothesis about the course of a specific accident situation and the wrong hypothesis may translate into unfair legal consequences to be borne by the participants in such a situation. As an example: if, say, the minimum safe value of the distance between the vehicle and the obstacle at the initial instant were 40 m then, without taking the uncertainty into account, a judgment might be formulated that the driver should manage to stop the vehicle and the collision would not take place. The taking of the uncertainty into account, regardless of the method of determining it, would cause such a statement to be unprovable.
It is difficult to show unambiguously which of the uncertainty determination methods should be considered the best. The selection depends to a considerable extent on the model (simulation or analytical) adopted to analyse the phenomenon observed and on the determinability of the input data (e.g. parameters of the random data distribution). To select the method, the limitations of each of them described in Sect. 3 and the above conclusions drawn from results of the example application of the methods should be taken into consideration.

Conclusion
The calculations carried out in accident reconstruction are burdened with uncertainty. A failure to take this uncertainty into account may considerably affect the expert's opinion about the course of the incident under analysis. Correct determination of the uncertainty of the calculation results and, subsequently, of the opinion as a whole will improve the reliability of that opinion.
In this study, the problems related to determining the uncertainty of calculation results have been discussed. For the tools having the form of mathematical models of vehicle dynamics, a set of methods has been presented that make it possible to determine the uncertainty of calculation results stemming from the uncertainty of the input data. The available uncertainty determination methods, known in great measure from the area of uncertainty in metrology, are characterized by very different degrees of complexity and by differing suitability for the computing tools used. The example calculations made for the models used in accident reconstruction have shown that the individual methods may produce results differing, even significantly, from each other. It seems reasonable to use only those methods that, apart from being applicable to the specific tool employed for the analysis, enable the lowest uncertainty of the expert's opinion to be obtained. Unfortunately, such methods are usually of the probabilistic type. For them to be employed, at least the statistical distributions of the quantities taken as the input data must be known. This requirement is often difficult to meet for the parameters used in calculations related to accident reconstruction. There is thus a need to determine distributions of this kind, because such data are lacking even for the most fundamental parameters used by forensic experts, such as the driver reaction time.

Compliance with Ethical Standards
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.