Abstract
The study is dedicated to the problem of uncertainty in the analysis of accident situations in road traffic. The term “uncertainty” is well established with reference to measurement techniques, but its application to the analysis of accident situations in road traffic, including accident reconstruction, is a relatively new field of knowledge. The objectives of this work include the presentation and examination of selected aspects of taking uncertainty into account when analysing the course of an accident and making the necessary calculations. Apart from the scientific objectives, an important utilitarian goal may also be pointed out: the data and methods presented may be used by automotive technology experts in their accident reconstruction work. The paper presents seven methods that enable the uncertainty of the data used for calculations to be taken into account, i.e. the extreme values method, the total differential method, the higher-order total differential method, the finite-difference method, the Gauss method, the method based on the description of stochastic processes, and the Monte Carlo method. Apart from formal (mathematical) descriptions of the methods, an example of their use for the estimation of the uncertainty of selected quantities that describe an accident situation has been demonstrated. The strengths and weaknesses of the individual methods have been discussed in the context of the application considered.
Introduction
Purposes of Analysing Road Accidents
When road accidents and collisions are examined, they may be either treated as a mass phenomenon or analysed individually. The examinations are carried out, above all, to learn the nature of such incidents (whether considered in mass or individual terms) in order to identify their causes and, afterwards, to take actions aimed at improving road traffic safety in the future. Analyses of this kind are used by institutions responsible for shaping the transport safety system. A separate group of examinations consists of investigations carried out to ascertain the accident circumstances that would enable the identification of the perpetrators and those to blame for the accident. In this case, the analyses are chiefly used by law-enforcement authorities (prosecutors, courts, etc.).
One of the elements of the analysis of an accident (collision) that has taken place is “accident reconstruction”, i.e. an attempt to reconstruct the course of what happened. The reconstruction results may be of crucial importance, especially for the participants in the incident. Such results provide grounds for the law-enforcement authorities to formulate procedural motions as regards accident perpetrators and for the court to make a decision about guilt and to pass a sentence. It should be stressed here that, intrinsically, the analysis is carried out after the incident has taken place. The forensic expert who prepares the opinion uses his/her knowledge and the trace evidence collected at the incident site (including the results of post-incident measurements), makes definite assumptions regarding the values of the parameters that describe the incident, and, using the methods available to him/her, carries out a series of calculations and inferences in order to determine the quantities that are important for identifying the accident causes. Such quantities may describe the pre-incident behaviour of the participants, the motion of the vehicle or vehicles involved, or other important circumstances.
Given this purpose, the reliability of the expert’s opinion is essential. Of great importance are the competence of the investigators, the adequacy of the tools used for the accident reconstruction, and the appropriate selection of the parameter values assumed. The uncertainty of the opinion is a somewhat different issue. Intrinsically, only approximate values of most parameters can be assumed. Therefore, a question arises about the accuracy of the parameter values determined in the accident reconstruction process or, in other words, about the uncertainty of determining the values of the quantities that are important in terms of the reconstruction purposes. This is the basic thread of this work.
The Notion of Uncertainty
The term “uncertainty” is used in many fields of science, where its meaning may differ. It is used in decision theory, which is one of the branches of mathematics and finds application in very different areas, such as statistics, information science, engineering problems (optimization), psychology, sociology, economics (management), or medicine. In general, uncertainty is defined as a state (situation) where the decisions made may produce various effects, with the probabilities of such effects being unknown [15]. The term “uncertainty” is firmly established in the fields of metrology and measurement techniques. Here, this term may be considered in its broader sense, as a set of general doubts about measurement results. However, it is more often understood in the narrower meaning, i.e. as a parameter describing the limits of variation in measurement results. In the Guide to the Expression of Uncertainty in Measurement [16], uncertainty is defined as a “parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand”.
In the formulation of expert’s opinions about road accidents, the uncertainty of calculation results will always be involved. This will be related to both the uncertainty of the data assumed and the uncertainty of the computing tools used. With some simplification, the uncertainty of results of accident reconstruction calculations may be considered as corresponding to the notion of uncertainty of an indirect measurement in measurement technology. It may be assumed that the uncertainty of calculation results obtained during an analysis (reconstruction) of a road accident (or, in more general terms, accident situation) will be a parameter (or a set of parameters) describing the possible dispersion of values of the quantity (or quantities) determined by the calculations.
In terms of usefulness, the accident reconstruction uncertainty is often associated with the reconstruction reliability. These two notions are not identical. The uncertainty should be understood as defined above, while the reliability is related to the confidence that the reconstruction result (whether or not the uncertainty has been determined) is correct. A formal description of determining the reliability has been proposed in [24], where the reliability has been defined, in most simplified terms, as the probability that the reconstruction is true, using the probabilistic structure of a Bayesian network.
Objective and Scope of the Study
Limiting ourselves to the purely computational problems, the process may be presented in the form of a simple diagram (Fig. 1): the expert has a set of data describing the accident under analysis (Data), runs calculations using a method that is available or chosen in consideration of the nature of the incident and the actual purpose of the analysis (Tools), and obtains a specific result (Results). As an example: if the problem under analysis is the vehicle braking process and the quantity to be found is the vehicle stopping distance S_{z}, the set of input data may consist of the initial velocity of vehicle motion (V_{0}), the braking deceleration (a_{h}), the driver reaction and braking system response time (t_{r}), and the deceleration rise time (t_{n}). As the computing method, any method may be used that would be suitable for transforming the data set into the vehicle stopping distance S_{z} to be determined, e.g. the analytical formulas known from the fundamentals of the mechanics of vehicle motion (such as those given in [20]).
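As a minimal sketch of the Data → Tools → Results chain, the stopping-distance calculation can be coded in a few lines. The formula used here, S_{z} = V_{0}(t_{r} + t_{n}/2) + V_{0}²/(2a_{h}), is a common textbook approximation (it neglects a small term of the order of a_{h}t_{n}²), not necessarily the exact expression of [20], and all numerical values are hypothetical:

```python
def stopping_distance(v0, a_h, t_r, t_n):
    """Approximate stopping distance S_z [m].

    v0  -- initial velocity [m/s]
    a_h -- braking deceleration [m/s^2]
    t_r -- driver reaction + braking system response time [s]
    t_n -- deceleration rise time [s]

    Uses the common approximation S_z = v0*(t_r + t_n/2) + v0**2/(2*a_h),
    which neglects a small term of the order of a_h*t_n**2.
    """
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * a_h)

# Hypothetical input data: 72 km/h, dry asphalt, typical reaction time.
S_z = stopping_distance(v0=20.0, a_h=7.0, t_r=1.0, t_n=0.2)
print(round(S_z, 2))  # about 50.57 m
```

This toy model is reused in the method sketches further below, so that the uncertainties produced by the different methods can be compared on the same data.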
From the point of view of the analysis objective, the most important issue is the final analysis result. It is burdened with a definite uncertainty, stemming from the uncertainty of the input data and from the uncertainty generated by the computing method used. As regards the data, the uncertainty may come from different sources. Some of the data may be taken from measurements carried out at the accident site; this is the case where classic measurement uncertainty, both of random and systematic type, is encountered. Some other data, however, are assumed by the expert who runs the calculations, because either appropriate measurements are impracticable for technical, organizational or economic reasons or such data cannot be obtained directly. A good example here is the driver reaction time, because its value may vary within very wide limits depending on many diverse factors describing e.g. the complexity of the traffic situation, the momentary psychophysical condition of the driver, etc.
As regards the computing methods, the uncertainty arises from the models and other mathematical tools used, which represent the real phenomena only in a simplified way. Even if the true values of individual parameters of the computational model adopted are used, the result obtained is only an approximation of the true value to be found. Simultaneously, the uncertainty resulting from the use of a specific method is not necessarily correlated with the degree of complexity of the model employed. Here, expert’s knowledge and skills are important for the appropriate selection and use of a model that would best suit the problem under analysis, in respect of the uncertainty as well.
In this study, attention will be focused on the first source of uncertainty and the considerations will be dedicated to the methods that would make it possible to take the uncertainty of the input data into account in the calculations.
The Problem of Uncertainty in the Reconstruction of Road Accidents
Road Accident Reconstruction Methods
In most general terms, the said methods may be divided into two categories:

– those using mathematical models of the man-vehicle-environment system;

– those using data recorded by “black box” type devices, i.e. Event Data Recorders (EDR).
The former category is the basic one. The methods of the latter category were unavailable until quite recently: the first automotive EDRs appeared in the mid-1990s, but they have still not become widespread equipment of motor vehicles.
The models encountered at present are characterized by very different degrees of complexity, varying from simple analytical models to sophisticated systems where more or less complicated simulation programs must be used. In the complex simulation programs, multiple partial models most frequently occur, which represent various subsystems or components of the man-vehicle-environment system and together constitute a test environment designed for specific purposes.
Sources of Uncertainty in the Reconstruction of Road Accidents
As shown in the schematic diagram in Fig. 1, the accident reconstruction calculations are carried out for a certain set of data. In the case of mathematical models being used, the input data are assumed by the expert; if EDR data are available, then the values recorded are often used as the input. Two basic sources of uncertainty of the input data may be distinguished:

– measurement uncertainty of the quantities measured;

– uncertainty of the parameter values assumed (referred to as statistical uncertainty).
For data obtained from measurements, the uncertainty sources may be all the factors that are characteristic of the specific measurement techniques (see e.g. [16]), i.e.: incomplete definition of the measurand, uncertainty related to the carrying out of the measurement (including errors of the method and measuring system used, non-representativeness, errors caused by environmental impact, reading errors, approximation and simplification errors), or uncertainty of measuring instruments.
However, the carrying out of full-scope measurements on all the objects involved, whether at the incident site or anywhere else, is hardly possible or actually impracticable. This is due to the scale of such a task, because the number of quantities to be measured (e.g. the number of data items to be introduced into simulation programs) may be of the order of several hundred or even more. The measurement of some parameters may be infeasible (as an example, this applies to many parameters that describe the collision process, such as the vehicle body stiffness curve or the characteristics of other parts or objects damaged during the collision). The measurement of many other parameters might be possible, but this would require a lot of complicated and costly work (an example might be such inertial parameters of a vehicle as the location of the centre of vehicle mass or the moments of inertia of the vehicle body solid or road wheels). Therefore, a significant number of data are assumed by the expert, based on the technical documentation of the vehicles involved, simplified models used to estimate the values of the quantities in question, the expert’s experience, or specialized literature.
There is also a specific category of parameters that are measurable in principle but actually cannot be measured, or can be measured in exceptional situations only. At the same time, such parameters are often critical from the point of view of the course of the incident. As regards vehicle motion, two parameter groups should be pointed out here (this has already been mentioned in Sect. 1.3): one of them is related to the characteristics of the tangential tyre-road interaction (in simplified terms, the tyre-road adhesion characteristics) and the other one is related to the description of human (vehicle driver’s or pedestrian’s) behaviour.
Another source of uncertainty is the tool used to transform the set of input data into the set of analysis results sought. In the case of classic calculations, such a tool is the computing method employed, i.e. the mathematical model of the phenomenon under analysis. This type of uncertainty is referred to as modelling uncertainty. Its estimation is based on the data obtained from validation or experimental verification of the model.
To recapitulate: the uncertainty of the calculation results obtained in an analysis of accident situations is a function of the uncertainty of the input data taken for the calculations (burdened with measurement uncertainty or uncertainty stemming from specific attributes of the data) and the uncertainty of the computing tool. A separate problem is the method of transforming the uncertainty of the input data into the uncertainty of the calculation results to be found, i.e. the method of taking the data uncertainty into account. Depending on the selection of this method (including its applicability to the specific computing method used to analyse the situation), different uncertainties of the calculation results may be obtained for the same uncertainty of the input data.
Review of the Literature Dealing with Uncertainty in Road Accident Analysis and Objective of the Study
The problem of uncertainty in the analysis of road accidents, although encountered from the very outset of accident reconstruction attempts, has actually been addressed in the scientific literature for quite a short time. The first publications where reference is directly made to the issues of uncertainty in the field of analyses of accident situations in road traffic date back to the first half of the 1990s. These were the American works Uncertainty in Accident Reconstruction Calculation [5] and The Technique of Uncertainty Analysis as Applied to the Momentum Equation for Accident Reconstruction [22]. Both of them presented analytical methods that made it possible to determine the uncertainty of the results obtained and the applications of such methods to simple calculations related to accident reconstruction (estimation of the stopping distance, estimation of the pre-impact velocities). An important item is the publication Uncertainty Analysis for Forensic Science [8], where the authors present the fundamentals of the uncertainty calculus (including the probability theory and sensitivity analysis) from the point of view of its applicability to the preparation of forensic experts’ opinions, including those related to accident analysis.
To date, many publications have come out that raise these problems. Apart from the works mentioned above, various methods of taking the uncertainty of data into account have been considered. The use of the total differential method has been discussed e.g. in [25]. In [2], the finite-difference method has been used to estimate the uncertainty. Numerous publications have dealt with the use of the Monte Carlo method [1, 9, 10, 12, 17, 26, 27]. A technique where elements of the DoE (Design of Experiments) theory are used is also employed [6]. A probabilistic approach to uncertainty may be found in [11], where the uncertainty is defined as a conditional probability. The publications [3, 13] cover the issue of measurement uncertainty in the reconstruction of motor vehicle collisions. The estimation of uncertainty employing “interval arithmetic”, combined with elements of the DoE theory, has also been considered in the literature [28]. In [18], the point estimation method has been presented as a probabilistic tool for determining the uncertain parameters of a vehicle collision. The issues concerning the uncertainty of accident reconstruction calculations have also been touched upon indirectly in [21], where the sensitivity of the calculated values of the vehicle velocity change (ΔV) to vehicle and impact parameters is discussed, and in [23], where the coherence of data recorded in an accident database is analysed. A reference to this problem has also been made in [4], where a method has been presented that makes it possible to reduce the uncertainty of the estimated velocity of a pedestrian crossing the road.
The above shows that there are many methods of determining the uncertainty of calculations. Hence, a question arises about the comparability of results of such calculations. A discussion of this matter has already been presented in [19]. In this study, the authors return to this issue, increasing the number of methods considered. With reference to the schematic diagram shown in Fig. 1 herein:

– seven useful methods of transforming the uncertainty of input data into the uncertainty of calculation results have been presented, together with their formal descriptions;

– an example of their use has been demonstrated, with the uncertainties obtained by the different methods being compared.
Computing Methods in the Analysis of Uncertainty of Accident Reconstruction
Theoretical Foundations of the Seven Methods
First, let us assume that an adequate data set and a tool (mathematical model) making it possible to calculate the quantities to be found are available. To generalize, let us adopt a matrix notation as a more convenient form, treating the set of input data as a data vector and the set of calculation results as a result vector:

y = f(x) (1)

where x = [x_{1}, x_{2}, …, x_{m}]^{T}—data vector, known; y = [y_{1}, y_{2}, …, y_{n}]^{T}—result vector, to be found; f = [f_{1}, f_{2}, …, f_{n}]^{T}—functional vector, describing the relation between x and y (a mathematical model).
Let us assume that the uncertainties of the input data are also known:
Δx = [Δx_{1}, Δx_{2}, …, Δx_{m}]^{T}—vector of uncertainty of the estimation of vector components.
The following vectors are to be found:
y = [y_{1}, y_{2}, …, y_{n}]^{T} and Δy = [Δy_{1}, Δy_{2}, …, Δy_{n}]^{T}; the latter is the vector of absolute uncertainty of the estimation of vector y components.
When the absolute uncertainty is normalized in relation to the nominal value, the relative uncertainty is obtained:

δ_{i} = Δy_{i}/y_{i}, i = 1, …, n
In the measurement uncertainty theory, two basic approaches are discerned, where the uncertainty is determined using:

– a deterministic model, also referred to as an “interval model”, where the notion of probability is not involved and the uncertainty value determined (Δy_{i}, i = 1, …, n) is the uncertainty bound (maximum);

– a probabilistic (or statistical) model, where the result (y_{i}, i = 1, …, n) is intrinsically a random variable and its uncertainty is measured by the dispersion of its distribution; in most cases, the parameters used as measures are the standard deviation (“standard uncertainty”) or its multiple (“expanded uncertainty”).
In the four sub-sections below, the deterministic methods will be presented, i.e. the upper and lower bounds method (or extreme values method—EVM), the first-order and second-order total differential methods (TDM and TDM2, respectively), and the finite-difference method (FDM); the three probabilistic methods, i.e. the Gauss method (PrM), the method based on the description of stochastic processes (PrStM), and the Monte Carlo method (MCM), will be described in the subsequent sub-sections.
Description of the Seven Methods
Upper and Lower Bounds Method (EVM)
In the upper and lower bounds method (or extreme values method), an assumption is made that the value of the quantity to be found, i.e. the value of a component of vector y, lies between the minimum and maximum values obtained by substituting the minimum and maximum values of vector x components:

y_{imin} ≤ y_{i} ≤ y_{imax}, i = 1, …, n

where y_{imin} and y_{imax} are the extreme values attained by f_{i}(x) on the data box defined by x_{min} = [x_{1min}, x_{2min}, …, x_{mmin}]^{T} and x_{max} = [x_{1max}, x_{2max}, …, x_{mmax}]^{T} (e.g.: x_{jmin} = x_{j} − Δx_{j}, x_{jmax} = x_{j} + Δx_{j}, j = 1, …, m).

A measure of the uncertainty of the quantity y_{i} to be found is the difference:

Δy_{i} = y_{imax} − y_{imin}
A graphic interpretation of the uncertainty determined by means of the extreme values method has been shown in Fig. 2, based on an example with a function of a single variable.
An important assumption made in this method is the requirement of monotonicity of the function y_{i} = f_{i}(x_{j}) on the interval of vector x component values under analysis (this is a prerequisite for the statement that the extreme values of vector y components occur at the ends of the intervals defined by the x_{min/max} values). Depending on the type of monotonicity, y_{imin/max} will be obtained for x_{jmin} or x_{jmax}: for f_{i} increasing in x_{j}, y_{imin} corresponds to x_{jmin} and y_{imax} to x_{jmax}; for f_{i} decreasing in x_{j}, the converse holds.
If the function y_{i} = f_{i}(x_{j}) is not monotonic on the intervals defined by the x_{min/max} values, the local extrema must be identified for the y_{imin}/y_{imax} extreme values to be determined.
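A minimal sketch of the EVM, under the monotonicity assumption discussed above: the model is evaluated at every corner of the data box x_{(0)} ± Δx, which then brackets the result. The stopping-distance model (a common textbook approximation) and all uncertainty intervals are illustrative assumptions:

```python
from itertools import product

def stopping_distance(v0, a_h, t_r, t_n):
    # Common textbook approximation of the stopping distance [m].
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * a_h)

# Nominal values and assumed (illustrative) uncertainties of the input data.
nominal = {"v0": 20.0, "a_h": 7.0, "t_r": 1.0, "t_n": 0.2}
delta   = {"v0": 1.0,  "a_h": 0.5, "t_r": 0.1, "t_n": 0.05}

# Evaluate the model at every corner of the data box x_(0) +/- dx;
# for a function monotonic in each variable these corners contain the extremes.
names = list(nominal)
corners = product(*[(nominal[k] - delta[k], nominal[k] + delta[k]) for k in names])
values = [stopping_distance(**dict(zip(names, c))) for c in corners]

y_min, y_max = min(values), max(values)
print(round(y_min, 2), round(y_max, 2))  # bounds on S_z
```

For a non-monotonic model, the corner search alone would not be sufficient, as noted above: the interior of the intervals would also have to be examined for local extrema.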
Total Differential Method (TDM)
Here, the nominal values of vector x components (x_{(0)}= [x_{1(0)}, x_{2(0)}, …, x_{m(0)}]^{T}) and the Δx uncertainty values (Δx = [Δx_{1}, Δx_{2}, …, Δx_{m}]^{T}) are known. The y = [y_{1}, y_{2}, …, y_{n}]^{T} values to be found are directly defined by Eq. (1) for the set of nominal x_{(0)} values.
In the total differential method, the uncertainty of determining vector y components can be found by using the notion of the first-order sensitivity coefficient and the total differential:

\(\Delta y_{i} = \sum\limits_{j = 1}^{m} {\left| {W_{ij} } \right|\Delta x_{j} } ,\quad W_{ij} = \left. {\frac{{\partial y_{i} }}{{\partial x_{j} }}} \right|_{{{\mathbf{x}} = {\mathbf{x}}_{(0)} }} ,\quad i = 1, \ldots ,n\) (7)
In the matrix notation, this may be written as follows:

\(\Delta {\mathbf{y}} = \left| {\mathbf{W}} \right|\Delta {\mathbf{x}}\) (8)

where |W| denotes the n × m matrix of the absolute values of the sensitivity coefficients W_{ij}.
A graphic interpretation of the uncertainty determined by means of the total differential method has been illustrated in Fig. 3. It should be noted that in this method, the uncertainty is determined by linearization of function f_{i}(x_{1}, …, x_{m}), i = 1, …, n.
The uncertainty vector Δy = [Δy_{1}, Δy_{2}, …, Δy_{n}]^{T} defines the maximum values of errors in estimating vector y components, i.e. the uncertainty bound. For linear models y_{i}= f_{i}(x_{j}), this method becomes identical with the extreme values method.
This method is convenient, but it only produces good results when the relations f_{i}(x_{j}) are characterized by relatively small changes in the sensitivity coefficient W_{ij} in the interval x_{j} ± Δx_{j} of interest. Its basic strong point is the fact that it directly includes elements of sensitivity analysis, which makes it possible to identify the parameters whose impact on the calculation results is more or less considerable.
One of the weak points of determining the uncertainty with the use of formulas (7) or (8) may be the unreasonably “extended” uncertainty range, hindering its practical use in estimating the uncertainty (this will be demonstrated in a calculation example; however, the same may be said about the EVM). This applies in particular to situations where many data x_{j} are burdened with uncertainty and the “effects” of the individual uncertainties (formulas (7) or (8)) are summed up due to the nature of the method. As mentioned previously, this method determines the uncertainty bound if an assumption is made that the situation where all the data take the values at the ends of their intervals can occur with a probability identical to that of any other situation. In practice, such a case is hardly realistic. Therefore, to determine the uncertainty by this method, a procedure is sometimes run that is similar to that adopted for complex measurement uncertainties and a statistical model. In such a case, the uncertainty is assumed as a vector sum of the uncertainty components and this is a “combined standard uncertainty” determined in accordance with the “law of propagation of uncertainty” (also referred to as the “uncertainty propagation rule”) [8, 16]:

\(\Delta y_{i} = \sqrt {\sum\limits_{j = 1}^{m} {\left( {W_{ij} \Delta x_{j} } \right)^{2} } } ,\quad i = 1, \ldots ,n\) (9)
Sometimes, the uncertainty thus determined is called “mean square uncertainty”, e.g. in [26]. To differentiate, the uncertainty defined by (7) or (8) will be denoted here by TDM_{M} (“maximum uncertainty” or “uncertainty bound”) while that defined by (9) will be denoted by TDM_{S} (“mean square uncertainty”).
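For a simple analytical model, the first-order sensitivity coefficients are available in closed form, so both the TDM_{M} and TDM_{S} variants can be sketched directly. The stopping-distance model S_z = v0(t_r + t_n/2) + v0²/(2a_h) and all numerical values below are illustrative assumptions, not data from the source:

```python
from math import sqrt

# Nominal (illustrative) input data and their assumed uncertainties.
v0, a_h, t_r, t_n = 20.0, 7.0, 1.0, 0.2
dx = {"v0": 1.0, "a_h": 0.5, "t_r": 0.1, "t_n": 0.05}

# Analytic first-order sensitivity coefficients W_j = dS_z/dx_j
# for S_z = v0*(t_r + t_n/2) + v0**2/(2*a_h), evaluated at the nominal point.
W = {
    "v0":  t_r + t_n / 2.0 + v0 / a_h,
    "a_h": -v0 ** 2 / (2.0 * a_h ** 2),
    "t_r": v0,
    "t_n": v0 / 2.0,
}

# TDM_M: maximum uncertainty (uncertainty bound), formula of type (7).
dS_max = sum(abs(W[k]) * dx[k] for k in W)
# TDM_S: "mean square" (combined) uncertainty, formula of type (9).
dS_rms = sqrt(sum((W[k] * dx[k]) ** 2 for k in W))

print(round(dS_max, 2), round(dS_rms, 2))
```

As the sketch illustrates, the mean-square value is necessarily smaller than the maximum one, since the vector sum never exceeds the arithmetic sum of the absolute components.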
Higher-Order Total Differential Method (TDM2)
In the classic total differential method described above, the function y = f(x) is linearized. In the case of nonlinear relations, when considerable changes in the sensitivity coefficient W_{ij} occur in the interval x_{j} ± Δx_{j} of interest (i.e. at a significant nonlinearity), the uncertainty determined will be burdened with an error (cf. Figs. 2 and 3).
Formulas (7) and (8) may be derived by expanding the function y = f(x) into a Taylor series:

\(y_{i} = f_{i} \left( {{\mathbf{x}}_{(0)} } \right) + \sum\limits_{j = 1}^{m} {\frac{{\partial f_{i} }}{{\partial x_{j} }}\Delta x_{j} } + \frac{1}{2}\sum\limits_{j = 1}^{m} {\sum\limits_{k = 1}^{m} {\frac{{\partial^{2} f_{i} }}{{\partial x_{j} \partial x_{k} }}\Delta x_{j} \Delta x_{k} } } + \cdots\) (10)

Hence, the following will be obtained:

\(\Delta y_{i} = y_{i} - f_{i} \left( {{\mathbf{x}}_{(0)} } \right) = \sum\limits_{j = 1}^{m} {\frac{{\partial f_{i} }}{{\partial x_{j} }}\Delta x_{j} } + \frac{1}{2}\sum\limits_{j = 1}^{m} {\sum\limits_{k = 1}^{m} {\frac{{\partial^{2} f_{i} }}{{\partial x_{j} \partial x_{k} }}\Delta x_{j} \Delta x_{k} } } + \cdots\)

If only the term with the first-order derivative is taken into account then, after absolute values are introduced to make the individual equation terms independent of the sign of the derivative values, the relation described by formula (7) will be obtained. If the terms with the second-order derivatives are also taken into account, then an equation defining the uncertainty by the second-order total differential method TDM2 will be formulated:

\(\Delta y_{i} = \sum\limits_{j = 1}^{m} {\left| {W_{ij} } \right|\Delta x_{j} } + \frac{1}{2}\sum\limits_{j = 1}^{m} {\sum\limits_{k = 1}^{m} {\left| {W_{ijk}^{(2)} } \right|\Delta x_{j} \Delta x_{k} } }\)
where \(W_{ijk}^{(2)} = \left. {\frac{{\partial^{2} y_{i} }}{{\partial x_{j} \partial x_{k} }}} \right|_{{x_{j} = x_{j(0)} ,x_{k} = x_{k(0)} }}\), i = 1, …, n and j, k = 1, …, m.
Coefficients \(W_{ijk}^{(2)}\) are the coefficients of the second-order sensitivity of the ith quantity to the jth and kth parameters. In qualitative terms, the difference between the TDM and TDM2 methods has been illustrated in Fig. 4. For linear models y_{i} = f_{i}(x_{j}), this method becomes identical with the extreme values method and the first-order total differential method.
Equation (10) may also be used to derive formulas for determining the uncertainty with the higher-order terms taken into account. However, this is of limited practical importance in real applications. For functions of multiple variables, the number of partial derivatives (sensitivity coefficients) becomes very large. As an example: two first-order and three second-order sensitivity coefficients have to be determined for a function of two variables, while for a function of six variables, the numbers of such coefficients rise to 6 and 21, respectively (the number of the second-order coefficients is equal to the number of 2-combinations with repetitions of an m-element set). It should also be noted that if the uncertainty is determined by such a method, using total differentials of an order higher than one, then the uncertainty value obtained will always be overestimated and this will considerably reduce the usefulness of the method.
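The second-order correction can be sketched with numerically estimated derivatives standing in for the analytic ones (an assumption made purely for compactness); since every correction term in the TDM2 formula is non-negative, the TDM2 estimate can never be smaller than the first-order bound. The stopping-distance model and all data are illustrative assumptions:

```python
def f(x):
    v0, a_h, t_r, t_n = x
    # Common textbook approximation of the stopping distance [m].
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * a_h)

x0 = [20.0, 7.0, 1.0, 0.2]   # nominal values (illustrative)
dx = [1.0, 0.5, 0.1, 0.05]   # assumed data uncertainties
h = [1e-4 * v for v in x0]   # steps for numerical differentiation
m = len(x0)

def shifted(x, j, s):
    y = list(x)
    y[j] += s
    return y

# First-order sensitivities W_ij (central differences).
W1 = [(f(shifted(x0, j, h[j])) - f(shifted(x0, j, -h[j]))) / (2.0 * h[j])
      for j in range(m)]

# Second-order sensitivities W_ijk^(2) (central differences).
W2 = [[0.0] * m for _ in range(m)]
for j in range(m):
    for k in range(m):
        if j == k:
            W2[j][j] = (f(shifted(x0, j, h[j])) - 2.0 * f(x0)
                        + f(shifted(x0, j, -h[j]))) / h[j] ** 2
        else:
            fpp = f(shifted(shifted(x0, j, h[j]), k, h[k]))
            fpm = f(shifted(shifted(x0, j, h[j]), k, -h[k]))
            fmp = f(shifted(shifted(x0, j, -h[j]), k, h[k]))
            fmm = f(shifted(shifted(x0, j, -h[j]), k, -h[k]))
            W2[j][k] = (fpp - fpm - fmp + fmm) / (4.0 * h[j] * h[k])

d_tdm = sum(abs(W1[j]) * dx[j] for j in range(m))
d_tdm2 = d_tdm + 0.5 * sum(abs(W2[j][k]) * dx[j] * dx[k]
                           for j in range(m) for k in range(m))
print(round(d_tdm, 3), round(d_tdm2, 3))
```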
Finite-Difference Method (FDM)
The finite-difference method of uncertainty calculation is in practice a simplified version of the total differential method. Here, the partial derivatives do not have to be determined in analytical form. As in the TDM case, the uncertainty formula is derived by expanding the function into a Taylor series (see Eq. 10), with the series being confined to the first-order terms only. The partial derivative (sensitivity coefficient) values are estimated using a difference quotient, i.e. by replacing the derivative with the ratio of increments:

\(W_{ij} \approx \frac{{\delta y_{i} }}{{\delta x_{j} }}\)

where δx_{j}—a sufficiently small increment of the x_{j} value; δy_{i}—the increment of the function value caused by δx_{j}.

The uncertainty formula has a form similar to that of (7):

\(\Delta y_{i} = \sum\limits_{j = 1}^{m} {\left| {\frac{{\delta y_{i} }}{{\delta x_{j} }}} \right|\Delta x_{j} } ,\quad i = 1, \ldots ,n\)
For linear models y_{i}= f_{i}(x_{j}), this method intrinsically becomes identical with the methods presented previously.
Here, the option of determining the uncertainty as a vector sum of the uncertainty components is also used, as in the TDM case:

\(\Delta y_{i} = \sqrt {\sum\limits_{j = 1}^{m} {\left( {\frac{{\delta y_{i} }}{{\delta x_{j} }}\Delta x_{j} } \right)^{2} } } ,\quad i = 1, \ldots ,n\)
The δx_{j} value is selected arbitrarily (therefore, adequate experience of the person who runs the calculations is welcome). It should be such that the partial derivative value can be satisfactorily approximated. According to [8], the δx_{j} value should be initially assumed as about 0.01x_{j(0)} and then gradually reduced, if necessary, until it no longer affects the uncertainty level Δy_{i} obtained.
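The FDM recipe, including the step-size guidance quoted from [8] (start with δx_{j} ≈ 0.01x_{j(0)}, then reduce it until the uncertainty stabilizes), can be sketched as follows; the stopping-distance model and the uncertainty values are illustrative assumptions:

```python
def f(x):
    v0, a_h, t_r, t_n = x
    # Common textbook approximation of the stopping distance [m].
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * a_h)

x0 = [20.0, 7.0, 1.0, 0.2]   # nominal values (illustrative)
dx = [1.0, 0.5, 0.1, 0.05]   # assumed data uncertainties

def fdm_uncertainty(f, x0, dx, rel_step=0.01, tol=1e-4):
    """Maximum-type uncertainty with sensitivities from difference quotients.

    The step delta_j starts at rel_step*x_j(0) and is halved until the
    resulting uncertainty no longer changes by more than tol.
    """
    prev = None
    while True:
        total = 0.0
        for j, (xj, dxj) in enumerate(zip(x0, dx)):
            delta = rel_step * xj
            xp = list(x0)
            xp[j] = xj + delta
            W = (f(xp) - f(x0)) / delta   # difference quotient ~ dy/dx_j
            total += abs(W) * dxj
        if prev is not None and abs(total - prev) < tol:
            return total
        prev, rel_step = total, rel_step / 2.0

dS = fdm_uncertainty(f, x0, dx)
print(round(dS, 3))
```

With the steps made sufficiently small, the result converges to the analytic TDM_{M} value, which is the expected behaviour of the method.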
Gauss Probabilistic Method (PrM)
The uncertainty determination methods described above are categorized as deterministic. In such an approach, any combination of values x_{j} falling into the intervals x_{j(0)} ± Δx_{j}, j = 1, …, m is considered equally probable. In consequence, the uncertainty of calculations may be overestimated. To take into account the fact that some variants of such combinations (e.g. a situation where all the x_{j} values lie at the ends of the intervals x_{j(0)} ± Δx_{j}) may occur with a low probability, the probabilistic nature of the quantities under analysis should be considered.
In the probabilistic methods, an assumption is made that the components of vector x: x_{j}, j = 1, …, m are random variables with known probability distributions. In consequence, the components of vector y: y_{i}, i = 1, …, n defined by the functional relation y = f(x) are also random variables and the probability distribution of vector x determines the distribution of vector y. However, the analytical determination of the latter, when the numbers of components of vectors x and y exceed 2 and the functional vector f is nonlinear, is a complicated problem, solvable in some specific cases only. In the applications under consideration, therefore, it is justified to use a simplified method, which may be found in the literature dealing with measurement uncertainty, including [16], or with analyses of accident situations, such as [7] or [8]; in the calculus of errors, such a method is referred to as the “Gauss method” or simply the “statistical method”.
The said method is based on the following assumption: if the quantity to be found is a function of vector x: y = f(x) and the components of vector x: x_{j}, j = 1, …, m are described as independent random variables with the normal probability distribution \(N_{xj} (\bar{x}_{j} ,\sigma_{xj} )\), where \(\bar{x}\) is the mean value and \(\sigma_{x}\) is the standard deviation, then y_{i}, i = 1, …, n is a random variable with the normal probability distribution \(N_{yi} (\bar{y}_{i} ,\sigma_{yi} )\) and the mean value \(\bar{y}_{i}\) is a function of the mean values of vector x components:

\(\bar{y}_{i} = f_{i} \left( {\bar{x}_{1} ,\bar{x}_{2} , \ldots ,\bar{x}_{m} } \right)\)
The standard deviation \(\sigma_{yi}\) may be expressed by the following formula (identical with the formula for the combined standard uncertainty [16]):

\(\sigma_{yi} = \sqrt {\sum\limits_{j = 1}^{m} {\left( {\frac{{\partial f_{i} }}{{\partial x_{j} }}} \right)^{2} \sigma_{xj}^{2} } }\)
The uncertainty of the quantity to be found may be determined for any confidence level.
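A sketch of the Gauss method for an illustrative stopping-distance model (a common textbook approximation) with assumed input standard deviations; the expanded uncertainty is obtained with a coverage factor k = 2, corresponding to a confidence level of about 95% under the normality assumption:

```python
from math import sqrt

# Nominal (mean) values and assumed standard deviations of the input data.
v0, a_h, t_r, t_n = 20.0, 7.0, 1.0, 0.2
s = {"v0": 1.0, "a_h": 0.5, "t_r": 0.1, "t_n": 0.05}

# Analytic sensitivity coefficients of S_z = v0*(t_r + t_n/2) + v0**2/(2*a_h).
W = {
    "v0":  t_r + t_n / 2.0 + v0 / a_h,
    "a_h": -v0 ** 2 / (2.0 * a_h ** 2),
    "t_r": v0,
    "t_n": v0 / 2.0,
}

# Standard uncertainty of the result (law of propagation of uncertainty).
sigma_y = sqrt(sum((W[k] * s[k]) ** 2 for k in W))
# Expanded uncertainty for ~95% confidence (coverage factor k = 2).
expanded = 2.0 * sigma_y
print(round(sigma_y, 3), round(expanded, 3))
```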
Method Based on the Description of Stochastic Processes (PrStM)
This method is a generalization of the PrM method. It may be employed when the mathematical model is explicitly dependent on time. In general terms, such a model is a system of differential equations having the following general form:

\(\frac{{d{\mathbf{y}}}}{dt} = {\mathbf{F}}\left( {{\mathbf{y}},t} \right)\) (18)
where y = [y_{1}, y_{2}, y_{3}, …, y_{n}]^{T}—vector of state coordinates; F = [f_{1}, f_{2}, f_{3}, …, f_{n}]^{T}—functional vector.
When stochastic processes are introduced to the model, Eq. (18) may take a general form:
where \({\mathbf{G}} = \left[ {\begin{array}{*{20}c} {g_{11} } & \cdots & {g_{1m} } \\ \vdots & \ddots & \vdots \\ {g_{n1} } & \cdots & {g_{nm} } \\ \end{array} } \right]\), g_{ij}= g_{i}(y_{j},t) and X_{t}= [X_{t1}, X_{t2}, X_{t3}, …, X_{tm}]^{T}—vector of an mdimensional stochastic process.
The equation solving methods depend on the equation form and the nature of the stochastic processes. A good point of the approach presented is the fact that the results are obtained in the form of complete probabilistic characteristics of the parameters sought, determined for any freely chosen instant. On the other hand, the difficulty of obtaining an analytical solution is a serious limitation; significant simplifications (linearization methods, simplifications of the nature of the stochastic processes) are often indispensable even for models that are not very complicated. A necessity also arises to determine the characteristics of the stochastic process. In the case of processes compatible with the correlation theory of stochastic processes, the function describing the expected value and the correlation function should be known, while the latter is generally very difficult to determine. Therefore, the applications of this method to the problems under consideration are very restricted (nevertheless, an example application will be presented in Sect. 4).
Monte Carlo Method (MCM)
The Monte Carlo technique is now one of the most powerful computing tools used in analyses of the phenomena and processes that cannot be described by analytical models due to their complexity. It works particularly well in computational problems where random phenomena should be taken into account. In general terms, its essence lies in repeating an experiment many times with test parameter values being changed at random within a range defined by the specific type of the experiment and the phenomenon examined. Due to the iterative nature of this technique, it is counted among simulation methods. For this reason, the term “Monte Carlo simulation” can often be found in the literature (see e.g. [8, 9, 26]).
For the issues in question, this method makes it possible to find the probability distributions sought, using a model predetermined as a function y = f(x) that represents the phenomenon under analysis. The components of vector x: x_{j}, j = 1, …, m are assumed to be random variables with known characteristics (determined theoretically or empirically).
The random variables y_{i}, i = 1, …, n are determined by multiple numerical calculations made according to the predetermined relation y = f(x) for computer-generated pseudo-random numbers x_{j} drawn in accordance with appropriate distributions of the specific quantities. This method may also be employed when simulation models are used. With this objective in view, multiple simulations are carried out for randomly generated values of individual model parameters. The possible range of solutions y_{i} is obtained on the grounds of the pseudo-random statistical distributions of variables y_{i} generated as described above. The uncertainty measures are the measures of dispersion of the statistical pseudo-distributions of y_{i} thus obtained.
This method makes it possible to avoid the difficulties mentioned in subitems 3.2.5 and 3.2.6. A considerable impact on the correctness of the results obtained is exerted by the quality of the pseudo-random-number generators (measured by the finite quantity of numbers in the generator cycle). Noteworthy is also the fact that, in a degenerate form, i.e. in calculations carried out only for the extreme values of the x_{j} distributions and on an assumption of monotonicity of y_{i} = f(x_{j}), this method is equivalent to the extreme values method (EVM).
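As a minimal sketch of the principle (not the authors' own program), the propagation of assumed normal input distributions through y = f(x) can be done with the standard library alone; the example function and numbers below are hypothetical.

```python
import random
import statistics

def monte_carlo(f, x_mean, x_delta, n=50_000, seed=1):
    """Monte Carlo propagation: draw each x_j from N(mean_j, (delta_j/3)^2),
    evaluate y = f(x) repeatedly, and measure the dispersion of the
    resulting pseudo-distribution of y."""
    rng = random.Random(seed)
    ys = [f(*[rng.gauss(m, d / 3.0) for m, d in zip(x_mean, x_delta)])
          for _ in range(n)]
    return statistics.mean(ys), statistics.stdev(ys)

# Hypothetical example: y = x1 * x2 with x1 = 2 +/- 0.3, x2 = 3 +/- 0.6.
mean_y, sd_y = monte_carlo(lambda x1, x2: x1 * x2, [2.0, 3.0], [0.3, 0.6])
```

Restricting the draws to the interval end points only, under monotonicity of f, reproduces the EVM bounds mentioned above.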
Example Application of the Methods
Calculation of the Uncertainty of Estimation of the Vehicle Stopping Distance
The calculations will be made for one of the standard problems in accident situation analyses, i.e. for the vehicle braking process. This example has other good points as well: it may be described by a mathematical model that is analytically simple and, simultaneously, adequate. On the other hand, the parameters of this model describe all the components of the man-vehicle-road system, and their values are taken, in a significant part, from literature knowledge (they are burdened with statistical uncertainty).
The work with the mathematical model is started from a simplified time history of the process of vehicle braking on an even horizontal road, as shown in Fig. 5. Assuming additionally that the vehicle is braked with the tyre-road adhesion forces being fully utilized, we may state that the maximum braking deceleration value a_{hm} is:
where μ [–]—tyre-road adhesion coefficient (peak or sliding); g ≅ 9.81 m/s^{2}—acceleration of gravity.
If the initial braking speed V_{0} (m/s) (the vehicle speed at the instant t_{0}= 0) and the t_{r}, t_{n}, and a_{hm} values (see Fig. 5) are known then the stopping distance may be expressed by a simplified formula:
Thus, a functional relation y = f(x) has been obtained, where x = [x_{1}, x_{2}, x_{3}, x_{4}]^{T} ≡ [V_{0}, μ, t_{r}, t_{n}]^{T} and y = [y_{1}] ≡ [S_{z}]; f = [f_{1}], f_{1} = x_{1}·(x_{3} + x_{4}/2) + x_{1}^{2}/(2gx_{2}) (with an assumption adopted that the g value is certain). The calculations are made to determine the stopping distance y = [y_{1}] ≡ [S_{z}] and the uncertainty of determining its value Δy = [Δy_{1}] ≡ [ΔS_{z}], with an assumption adopted that the uncertainty values Δx = [Δx_{1}, Δx_{2}, Δx_{3}, Δx_{4}]^{T} ≡ [ΔV_{0}, Δμ, Δt_{r}, Δt_{n}]^{T} are known.
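The functional relation above is simple enough to code directly. In the sketch below the parameter values are hypothetical (Table 1 itself is not reproduced in this excerpt): a speed of about 50 km/h and a dry-asphalt adhesion coefficient.

```python
G = 9.81  # m/s^2, acceleration of gravity (assumed certain)

def stopping_distance(v0, mu, t_r, t_n):
    """Simplified stopping distance from the relation above:
    S_z = V0*(t_r + t_n/2) + V0^2 / (2*g*mu)."""
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * G * mu)

# Hypothetical nominal data: V0 = 13.89 m/s (50 km/h), mu = 0.8 (dry asphalt),
# t_r = 1.0 s (system response time), t_n = 0.2 s (deceleration rise time).
s_z = stopping_distance(v0=13.89, mu=0.8, t_r=1.0, t_n=0.2)
```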
The uncertainty will be calculated using the seven methods described previously, i.e. EVM, TDM, TDM2, FDM, PrM, PrStM, and MCM, for a common data set. The data set adopted has been given in Table 1. It represents typical road conditions, described below. The initial braking speed has been assumed as equal to the speed limit applicable to built-up areas, with a 10% tolerance (as an allowance for e.g. accuracy of speedometer readings and driver’s errors in taking the readings). The tyre-road adhesion coefficient value μ assumed corresponds to a dry asphalt road surface; in this case, the uncertainty has been assumed as being quite low—see the data given in the literature dealing with the mechanics of motor vehicle motion and accident reconstruction, e.g. [7, 20]. As regards the total system response time and the braking deceleration rise time, the data have been adopted in a similar way and the parameter values and their uncertainties are at a realistic level.
In three methods (TDM, TDM2, PrM), appropriate partial derivatives (sensitivity coefficients) must be determined. For the mathematical model described by Eq. (21), they will have the form as given in Table 2.
Upper and Lower Bounds Method (EVM)
According to Eq. (3), the extreme values may be determined from the following formulas (thanks to the simple form of function S_{z} = f(V_{0}, μ, t_{r}, t_{n}), its monotonicity is known):
where, in these formulas: x_{jmin} = x_{j(0)} − Δx_{j}, x_{jmax} = x_{j(0)} + Δx_{j}.
For the comparability with the other methods to be maintained, the uncertainty has been assumed as a half of the difference between S_{zmax} and S_{zmin}:
The relative uncertainty is the ratio of (23) to the arithmetic average of S_{zmax} and S_{zmin}:
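Using the known monotonicity of S_{z} (increasing in V_{0}, t_{r}, t_{n}; decreasing in μ), the extreme-values computation can be sketched as below. The numeric data are hypothetical, illustrative stand-ins rather than the Table 1 values.

```python
G = 9.81  # m/s^2

def stopping_distance(v0, mu, t_r, t_n):
    # S_z = V0*(t_r + t_n/2) + V0^2/(2*g*mu)
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * G * mu)

def evm_bounds(x0, dx):
    """Extreme values method: S_z grows with V0, t_r, t_n and falls with mu,
    so the bounds follow from the corresponding corner points."""
    v0, mu, t_r, t_n = x0
    dv0, dmu, dtr, dtn = dx
    s_min = stopping_distance(v0 - dv0, mu + dmu, t_r - dtr, t_n - dtn)
    s_max = stopping_distance(v0 + dv0, mu - dmu, t_r + dtr, t_n + dtn)
    half_width = (s_max - s_min) / 2.0          # absolute uncertainty, cf. (23)
    rel = half_width / ((s_max + s_min) / 2.0)  # relative uncertainty, cf. (24)
    return s_min, s_max, half_width, rel

# Hypothetical data: V0 = 13.89 +/- 1.39 m/s, mu = 0.8 +/- 0.05,
# t_r = 1.0 +/- 0.2 s, t_n = 0.2 +/- 0.05 s.
s_min, s_max, half_width, rel = evm_bounds([13.89, 0.8, 1.0, 0.2],
                                           [1.39, 0.05, 0.2, 0.05])
```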
Total Differential Method (TDM)
Here, the following relations hold:
Nominal value:
Maximum uncertainty (TDM_{M}):
Mean square uncertainty (TDM_{S}):
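With the sensitivity coefficients of Table 2 written out analytically (from the stopping-distance formula: ∂S_{z}/∂V_{0} = t_{r} + t_{n}/2 + V_{0}/(gμ), ∂S_{z}/∂μ = −V_{0}^{2}/(2gμ^{2}), ∂S_{z}/∂t_{r} = V_{0}, ∂S_{z}/∂t_{n} = V_{0}/2), the two TDM variants can be sketched as below; the numeric data are hypothetical.

```python
import math

G = 9.81  # m/s^2

def tdm_uncertainty(x0, dx):
    """Total differential method for S_z = V0*(t_r + t_n/2) + V0^2/(2*g*mu):
    maximum (TDM_M) and mean-square (TDM_S) uncertainties built from the
    analytical sensitivity coefficients."""
    v0, mu, t_r, t_n = x0
    sens = [t_r + t_n / 2.0 + v0 / (G * mu),  # dS/dV0
            -v0 ** 2 / (2.0 * G * mu ** 2),   # dS/dmu
            v0,                               # dS/dt_r
            v0 / 2.0]                         # dS/dt_n
    terms = [c * d for c, d in zip(sens, dx)]
    u_max = sum(abs(t) for t in terms)            # TDM_M
    u_rms = math.sqrt(sum(t * t for t in terms))  # TDM_S
    return u_max, u_rms

# Hypothetical data, not the Table 1 values.
u_max, u_rms = tdm_uncertainty([13.89, 0.8, 1.0, 0.2], [1.39, 0.05, 0.2, 0.05])
```

As expected, the mean-square estimate comes out markedly smaller than the maximum one.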
SecondOrder Total Differential Method (TDM2)
The nominal value is defined by formula (25). Based on (11), the uncertainty is described by the following equation:
FiniteDifference Method (FDM)
The nominal value is defined by formula (25). Based on (14) and (15), the uncertainty is described by the following equations:
Maximum uncertainty (FDM_{M}):
Mean square uncertainty (FDM_{S}):
and
(the other parameters x_{k}, k = 1, …, 4 and k ≠ j take nominal values x_{k(0)}).
The values of increments δx_{j}, j = 1, …, 4 have been assumed as recommended in [8], i.e. δx_{j}= 0.01x_{j(0)}.
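One possible reading of the finite-difference scheme (assuming central differences with the recommended increments δx_{j} = 0.01x_{j(0)}; Eqs. (14) and (15) themselves are not reproduced in this excerpt) can be sketched as follows, again with hypothetical data:

```python
import math

G = 9.81  # m/s^2

def stopping_distance(v0, mu, t_r, t_n):
    # S_z = V0*(t_r + t_n/2) + V0^2/(2*g*mu)
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * G * mu)

def fdm_uncertainty(x0, dx, rel_step=0.01):
    """Finite-difference method: each sensitivity is approximated by a
    central difference with increment 0.01*x_j(0), the other parameters
    keeping their nominal values; maximum (FDM_M) and mean-square (FDM_S)
    uncertainties are then formed."""
    terms = []
    for j, (xj, dj) in enumerate(zip(x0, dx)):
        step = rel_step * xj
        xp = list(x0); xp[j] += step
        xm = list(x0); xm[j] -= step
        dfdx = (stopping_distance(*xp) - stopping_distance(*xm)) / (2.0 * step)
        terms.append(dfdx * dj)
    u_max = sum(abs(t) for t in terms)            # FDM_M
    u_rms = math.sqrt(sum(t * t for t in terms))  # FDM_S
    return u_max, u_rms

# Hypothetical data, not the Table 1 values.
u_max, u_rms = fdm_uncertainty([13.89, 0.8, 1.0, 0.2], [1.39, 0.05, 0.2, 0.05])
```

For this nearly quadratic model the values agree with the total-differential results to within a fraction of a percent.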
Gauss Method (PrM)
The mean value is as defined by formula (21):
where symbol “¯” indicates the average value of the distribution of the specific parameter, i.e. \(\overline{{V_{0} }} = V_{0(0)} , \, \overline{\mu } = \mu_{(0)} , \, \overline{{t_{r} }} = t_{r(0)} ,\overline{{ \, t_{n} }} = t_{n(0)}\).
Based on (17), the standard deviation of random variable S_{z} is:
The standard deviations of random variables V_{0}, μ, t_{r}, and t_{n} have been assumed as 1/3 of the uncertainties of these parameters, i.e. \(\sigma_{{V_{0} }} = \Delta V_{0} /3,\sigma_{\mu } = \Delta \mu /3,\sigma_{{t_{r} }} = \Delta t_{r} /3,\sigma_{{t_{n} }} = \Delta t_{n} /3\).
The absolute uncertainty and relative uncertainty are as follows (see also [16]):
Method Based on the Description of Stochastic Processes (PrStM)
The time history of the vehicle braking deceleration has been assumed as having a form similar to that adopted previously (see Fig. 6). Three characteristic phases have been discerned, taking place in the time intervals denoted by t_{r}, t_{n}, and t_{a}, where t_{a} represents the time of braking with the braking force being fully developed. It has been assumed that in the third phase, the braking deceleration is a sum of a defined function of time f(t) (a “trend”) and a stochastic process X_{a}(t):
Moreover, it has been assumed that:

X_{a}(t)—stationary (in the broad sense) normal stochastic process with mean value of m_{Xa}, variance of \(v_{{X_{a} }} = \sigma_{{X_{a} }}^{2}\), and known correlation function K_{Xa}(τ);

trend is a function having the following form:
where A and B—coefficients; in general, they are random variables with normal distribution; A: N(m_{A}, \(\sigma_{A}^{2}\)), B: N(m_{B}, \(\sigma_{B}^{2}\)); m—mean value and σ—standard deviation.
The following initial conditions apply to Eq. (36):
where in general, V_{p} and S_{p} are random variables with normal distribution; V_{p}: \(N(m_{{V_{p} }} , \, \sigma_{{V_{p} }}^{2} )\), S_{p}: \(N(m_{{S_{p} }} , \, \sigma_{{S_{p} }}^{2} )\).
If the system response phase (t_{r}) and deceleration rise phase (t_{n}) are taken into account and the inequality 0 ≤ t′ < t_{r}+ t_{n} holds, then V_{p} and S_{p} become dependent random variables.
A complete description of the solution shown above may be found in [14]. Without going into detail, the solutions obtained in this case may be proven to be normal random processes. Figure 7 shows time histories of the solutions in the form of mean values of the distance travelled (S), vehicle speed (V), and longitudinal vehicle acceleration (a) and the corresponding time histories of standard deviations σ_{S}, σ_{V}, and σ_{a}. These curves have been obtained for the parameter values corresponding to the data given in Table 1:

m_{A}= 0 m/s^{3}, σ_{A}= 0.0 m/s^{3}; m_{B}= − 6.83 m/s^{2}, σ_{B}= 0.164 m/s^{2};

\(m_{{V_{p} }}\)= 12.9 m/s (46.3 km/h), \(\, \sigma_{{{\text{V}}_{\text{p}} }}\) = 0.48 m/s (1.72 km/h); \(m_{{S_{0} }}\)= 22.1 m, \(\, \sigma_{{{\text{S}}_{ 0} }}\) = 1.63 m;

coefficient of correlation of random variables S_{p} and V_{p}: k_{VS} = 0.276; the values describing random variables S_{p} and V_{p} have been determined using an analytical model of braking in rectilinear motion for the period t_{r} + t_{n} (see Figs. 5 and 6a), employing the Gauss method.
Parameters of random process X_{a}(t):

m_{Xa}= 0 m/s^{2};

correlation function form: \(K_{{X_{a} }} (\tau ) = v \cdot e^{ - u \cdot \left| \tau \right|} \cos \omega \tau\)
where τ—time difference; v = 0.01 m^{2}/s^{4}; u = 4.4 s^{−1}, ω = 20 s^{−1} (the form of the function and coefficient values have been selected on the grounds of experimental test results [14]).
MonteCarlo Method (MCM)
For these calculations, a special computer program MCM has been written, in which the function described by formula (21) has also been implemented. The model parameter values V_{0}, μ (or a_{hm}), t_{r}, and t_{n} may be generated as random (pseudo-random) numbers whose distributions are programmed as functions (e.g. normal, exponential, or uniform) or are empirically based. The data taken for the calculations corresponded to those specified in Table 1. The histograms representing the distribution of stopping distance S_{z} have been shown in Figs. 8 and 9. Figure 8 describes the situation where all the data (V_{0}, t_{r}, t_{n}, μ) were treated as random variables with truncated normal distribution (the numbers generated could not differ from the mean by more than treble the standard deviation). The situation with these data being treated as random variables with uniform (rectangular) distribution has been illustrated in Fig. 9. As can be seen, the results in both cases resemble in shape the curve representing a truncated normal distribution, but with different standard deviation values (higher in the latter case). Actually, however, neither of the distributions can be considered a truncated normal one. In both cases, it can be seen that the distribution curve is slightly asymmetric, with the mode being shifted towards the lower S_{z} values. This is due to a non-linearity of the relation represented by Eq. (20).
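The behaviour described above can be reproduced with a minimal stand-in for the MCM program (standard-library generators only; the input data are hypothetical, with truncation at ±3σ for the normal case):

```python
import random
import statistics

G = 9.81  # m/s^2

def stopping_distance(v0, mu, t_r, t_n):
    # S_z = V0*(t_r + t_n/2) + V0^2/(2*g*mu)
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * G * mu)

def truncated_gauss(rng, mean, delta):
    """Normal draw with sigma = delta/3, rejected outside +/- 3 sigma."""
    while True:
        x = rng.gauss(mean, delta / 3.0)
        if abs(x - mean) <= delta:
            return x

def mc_stopping(dist, n=40_000, seed=7):
    """Monte Carlo pseudo-distribution of S_z for truncated-normal or
    uniform (rectangular) input data; all values are hypothetical."""
    rng = random.Random(seed)
    x0 = [13.89, 0.8, 1.0, 0.2]   # V0 [m/s], mu [-], t_r [s], t_n [s]
    dx = [1.39, 0.05, 0.2, 0.05]  # assumed uncertainties
    ys = []
    for _ in range(n):
        if dist == "normal":
            x = [truncated_gauss(rng, m, d) for m, d in zip(x0, dx)]
        else:
            x = [rng.uniform(m - d, m + d) for m, d in zip(x0, dx)]
        ys.append(stopping_distance(*x))
    return statistics.mean(ys), statistics.stdev(ys)

mean_n, sd_n = mc_stopping("normal")
mean_u, sd_u = mc_stopping("uniform")
```

Consistent with Figs. 8 and 9, uniform inputs give a visibly wider S_{z} spread than truncated-normal inputs, while the means stay close to the nominal stopping distance.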
Summary of the Results
The calculation results have been summarized in Table 3. They will be discussed in Sect. 4.2.
Comparison Between the Seven Methods Used
To facilitate the comparison between the uncertainties estimated using the different methods, the calculation results specified in Table 3 have been presented graphically in Fig. 10 in the form of stopping distance ranges. The following conclusions may be drawn from the results presented:

The ranges of the solutions obtained using the deterministic methods in which the maximum uncertainty is estimated (EVM, TDM_{M}, TDM2, FDM_{M}) do not differ considerably from each other. The highest value has been obtained for the TDM2 method, where the uncertainty is about 6% greater than that calculated for the TDM method.

The results calculated using the probabilistic methods PrM and PrStM differ significantly from those obtained with the deterministic methods. The stopping distance ranges are much narrower, which is advantageous from the point of view of usefulness in accident reconstruction.

A similar effect may be obtained by using deterministic methods and calculating the mean square uncertainty (TDM_{S}, FDM_{S}).

The ranges determined by the Gauss probabilistic method (PrM) and the probabilistic method based on the description of stochastic processes (PrStM) are close to each other. This means that in the case under consideration and in similar problems, the PrM method, being relatively simple in comparison with the PrStM, will be sufficient for determining the probability distribution of the quantity sought.

The ranges determined by the Monte Carlo probabilistic method (MCM) depend on the types of the data probability distributions. In general, they are wider than those obtained from the other probabilistic methods (PrM, PrStM). When the input data are treated as random variables with uniform distribution (MCMu), the range calculated is close to that determined using the deterministic methods EVM, TDM_{M}, and FDM_{M}. For data treated as having normal distribution (MCMn), a narrower range has been obtained, which may be interpreted as an effect of coming closer to the PrM and PrStM methods. Hence, a hypothesis may be formulated that the MCM method is a compromise between the deterministic methods TDM and EVM and the probabilistic methods PrM and PrStM in terms of both their applicability and the reliability of the results obtained. The MCM method may also be considered a good reference for verifying results obtained with other methods.
Based on the results obtained in the calculation example, a statement may be made that the introduction of data uncertainty into the calculations causes considerable differences between the results of such calculations and the “nominal” results (i.e. the results obtained without taking the data uncertainty into account). Such an effect can be observed for each of the methods used to estimate the impact of the said inaccuracies.
In consideration of the above, and given that the data uncertainty values taken for the calculations were not excessive, the following general conclusion may be drawn: a failure to take the data uncertainty into account may result in the construction of an untrue hypothesis about the course of a specific accident situation, and the wrong hypothesis may translate into unfair legal consequences for the participants in such a situation. As an example: if, say, the minimum safe value of the distance between the vehicle and the obstacle at the initial instant were 40 m, then, without taking the uncertainty into account, a judgment might be formulated that the driver should manage to stop the vehicle and the collision would not take place. Taking the uncertainty into account, regardless of the method of determining it, would cause such a statement to be unprovable.
It is difficult to show unambiguously which of the uncertainty determination methods should be considered the best. The selection depends to a considerable extent on the model (simulation or analytical) adopted to analyse the phenomenon observed and on the determinability of the input data (e.g. parameters of the random data distribution). To select the method, the limitations of each of them described in Sect. 3 and the above conclusions drawn from results of the example application of the methods should be taken into consideration.
Conclusion
The calculations carried out during accident reconstruction are burdened with uncertainty. A failure to take the uncertainty into account in the calculations may considerably affect the expert’s opinion about the course of the incident under analysis. Correct determination of the uncertainty of the calculation results and, subsequently, of the opinion as a whole will improve the reliability of the opinion.
In this study, the problems related to determining the uncertainty of calculation results have been discussed. For tools having the form of mathematical models of vehicle dynamics, a set of methods has been presented that make it possible to determine the uncertainty of calculation results stemming from the uncertainty of the data taken as input. The uncertainty determination methods available, known in great measure from the area of uncertainty in metrology, differ widely in their degree of complexity and in their applicability to the computing tools used. The example calculations made for the models used in accident reconstruction have shown that the methods may produce results differing, even significantly, from each other. It seems reasonable to use only those methods that, apart from being applicable to the specific tool employed for the analysis, yield the lowest uncertainty of the expert’s opinion. Such methods are usually of the probabilistic type. For them to be employed, at least the statistical distributions of the quantities taken as input data must be known. This requirement is often difficult to meet in the case of the parameters used for calculations related to accident reconstruction; there is a need to determine distributions of this kind, because such data are lacking even for the most fundamental parameters used by forensic experts, such as the driver reaction time.
References
Ball JK, Danaher DA, Ziernicki RM (2007) Considerations for applying and interpreting Monte Carlo simulation analyses in accident reconstruction. SAE Technical Paper 2007-01-0741. https://doi.org/10.4271/2007-01-0741
Bartlett W, Fonda A (2003) Evaluating uncertainty in accident reconstruction with finite differences. SAE Technical Paper 2003-01-0489. https://doi.org/10.4271/2003-01-0489
Bartlett W, Wright W, Masory O, Brach R et al (2002) Evaluating the uncertainty in various measurement tasks common to accident reconstruction. SAE Technical Paper 2002-01-0546. https://doi.org/10.4271/2002-01-0546
Bastien C, Wellings R, Burnett B (2018) An evidence based method to calculate pedestrian crossing speeds in vehicle collisions (PCSC). Accid Anal Prev 118:66–76. https://doi.org/10.1016/j.aap.2018.05.020
Brach RM (1994) Uncertainty in accident reconstruction calculation. SAE Technical Paper 940722. https://doi.org/10.4271/940722
Brach RM (2007) Design of experiments and parametric sensitivity of planar impact mechanics. Proceedings of the 16th annual congress of the European Association for accident research and analysis (EVU). Institute of Forensic Research Publishers, Cracow 2007, pp 9–21
Brach RM, Brach M (2005) Vehicle accident analysis and reconstruction methods. SAE International, Warrendale
Brach RM, Dunn PF (2009) Uncertainty analysis for forensic science. Lawyers and Judges Publishing Company Inc, Tucson
Brach RM, Brach RM, Louderback A (2012) Uncertainty of CRASH3 ΔV and energy loss for frontal collisions. SAE Technical Paper 2012-01-0608. https://doi.org/10.4271/2012-01-0608
Daily J (2009) Monte Carlo techniques for correlated variables in crash reconstruction. SAE Technical Paper 2009-01-0104. https://doi.org/10.4271/2009-01-0104
Davis GA (2015) A comparison of Bayesian speed estimates from rollover and critical speed methods. SAE Technical Paper 2015-01-1434. https://doi.org/10.4271/2015-01-1434
Fleck G, Daily J (2010) Sensitivity of Monte Carlo modeling in crash reconstruction. SAE Technical Paper 2010-01-0071. https://doi.org/10.4271/2010-01-0071
Fonda AG (2004) The effects of measurement uncertainty on the reconstruction of various vehicular collisions. SAE Technical Paper 2004-01-1220. https://doi.org/10.4271/2004-01-1220
Guzek M (2000) Analiza prostoliniowego hamowania samochodu jako procesu stochastycznego (Analysis of rectilinear motor vehicle braking as a stochastic process). Zeszyty Naukowe Politechniki Świętokrzyskiej. Mechanika 71:147–156 (in Polish)
Hansson SO (1994) Decision theory. A brief introduction. Royal Institute of Technology (KTH), Stockholm
JCGM 100: Guide to the expression of uncertainty in measurement (1993) ISO, Geneva. http://www.bipm.org/en/publications/guides/gum.html. Accessed 7 Oct 2019
Kost G, Werner S (1994) Use of Monte Carlo simulation techniques in accident reconstruction. SAE Technical Paper 940719. https://doi.org/10.4271/940719
Liu Q, Liu J, Wu X, Han X, Cao L, Guan F (2019) An inverse reconstruction approach considering uncertainty and correlation for vehicle-vehicle collision accidents. Struct Multidiscip Optim 60(2):681–698. https://doi.org/10.1007/s00158-019-02231-9
Lozia Z, Guzek M (2005) Uncertainty study of road accident reconstruction—computational methods. SAE Technical Paper 2005-01-1195 (also published in SAE Special Publication SP-1930 “Accident Reconstruction 2005”, pp 163–178; in SAE Transactions, Journal of Passenger Cars: Mechanical Systems, Vol 114, No 6, pp 1342–1356; and as a subchapter in M. S. Varat (ed) “Crash Reconstruction Research: 20 Years of Progress (1988–2007)”, SAE International 2008, pp 615–629). https://doi.org/10.4271/2005-01-1195
Mitschke M, Wallentowitz H (2004) Dynamik der Kraftfahrzeuge. Springer, Berlin
Pride R, Giddings D, Richens D, McNally DS (2013) The sensitivity of the calculation of ΔV to vehicle and impact parameters. Accid Anal Prev 55C:144–153. https://doi.org/10.1016/j.aap.2013.03.002
Tubergen G (1995) The technique of uncertainty analysis as applied to the momentum equation for accident reconstruction. SAE Technical Paper 950135. https://doi.org/10.4271/950135
Vangi D, Gulino M, Cialdai C (2019) Coherence assessment of accident database kinematic data. Accid Anal Prev 123:356–364. https://doi.org/10.1016/j.aap.2018.12.004
Wach W (2013) Structural reliability of road accidents reconstruction. Forensic Sci Int 228(1):83–93. https://doi.org/10.1016/j.forsciint.2013.02.026
Wach W (2013) Uncertainty in calculations using Lambourn’s critical speed procedure. SAE Technical Paper 2013-01-0779. https://doi.org/10.4271/2013-01-0779
Wach W, Unarski J (2007) Uncertainty analysis of the pre-impact phase of a pedestrian collision. SAE Technical Paper 2007-01-0715. https://doi.org/10.4271/2007-01-0715
Wood D, O’Riordain S (1994) Monte Carlo simulation methods applied to accident reconstruction and avoidance analysis. SAE Technical Paper 940720. https://doi.org/10.4271/940720
Zou T, Yu Z, Cai M, Liu J (2010) Two non-probabilistic methods for uncertainty analysis in accident reconstruction. Forensic Sci Int 198:134–137. https://doi.org/10.1016/j.forsciint.2010.02.006
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Guzek, M., Lozia, Z. Computing Methods in the Analysis of Road Accident Reconstruction Uncertainty. Arch Computat Methods Eng 28, 2459–2476 (2021). https://doi.org/10.1007/s11831-020-09462-w