
Computing Methods in the Analysis of Road Accident Reconstruction Uncertainty

Abstract

The study is dedicated to the problem of uncertainty in the analysis of accident situations in road traffic. The term “uncertainty” is well established when used with reference to measurement techniques, but its application to the analysis of accident situations in road traffic, including accident reconstruction, is a relatively new field of knowledge. The objectives of this work include the presentation and examination of selected aspects of taking uncertainty into account when analysing the course of an accident and making the necessary calculations. Apart from the scientific objectives, an important utilitarian goal may also be pointed out: the data and methods presented may be used by automotive technology experts in their accident reconstruction work. The paper presents seven methods that make it possible to take into account the uncertainty of the data used for calculations, i.e. the extreme values method, total differential method, higher-order total differential method, finite-difference method, Gauss method, method based on the description of stochastic processes, and Monte-Carlo method. Apart from formal (mathematical) descriptions of the methods, an example of their use for estimating the uncertainty of selected quantities that describe an accident situation is demonstrated. The strengths and weaknesses of the individual methods are discussed in the context of the application considered.

Introduction

Purposes of Analysing Road Accidents

When road accidents and collisions are examined, they may be either treated as a mass phenomenon or analysed individually. The examinations are carried out, above all, to understand the nature of such incidents (whether considered in mass or individual terms), in order to identify their causes and, afterwards, to take actions aimed at improving road traffic safety in the future. Analyses of this kind are used by institutions responsible for shaping the transport safety system. A separate group of examinations consists of investigations carried out to ascertain the accident circumstances, which would enable the identification of the perpetrators and those to blame for the accident. In this case, the analyses are chiefly used by law-enforcement authorities (prosecutors, courts, etc.).

One of the elements of the analysis of an accident (collision) that has taken place is “accident reconstruction”, i.e. an attempt to reconstruct the course of what happened. The reconstruction results may be of crucial importance, especially for the participants in the incident: they provide grounds for the law-enforcement authorities to formulate procedural motions as regards accident perpetrators, and for the court to decide about guilt and pass a sentence. It should be stressed here that, intrinsically, the analysis is carried out after the incident has taken place. The forensic expert who prepares the opinion uses his/her knowledge and the trace evidence collected at the incident site (including the results of post-incident measurements), makes definite assumptions regarding the values of the parameters that describe the incident and, using the methods available, carries out a series of calculations and inferences in order to determine the quantities that are important for identifying the accident causes. Such quantities may describe the pre-incident behaviour of the participants, the motion of the vehicle or vehicles involved, or other important circumstances.

Given this purpose, the reliability of the expert’s opinion is essential. Matters of great importance are the competence of the investigators, the adequacy of the tools used for the accident reconstruction, and the appropriate selection of the parameter values assumed. The uncertainty of the opinion is a somewhat different issue: intrinsically, only approximate values of most parameters can be assumed. Therefore, a question arises about the accuracy of the parameter values determined in the accident reconstruction process or, in other words, about the uncertainty of determining the values of the quantities that are important in terms of the reconstruction purposes. This is the basic thread of this work.

The Notion of Uncertainty

The term “uncertainty” is used in many fields of science, where its meaning may differ. It is used in decision theory, which is one of the branches of mathematics and finds application in very different areas, such as statistics, information science, engineering problems (optimization), psychology, sociology, economics (management), or medicine. In general, uncertainty is defined as a state (situation) where the decisions made may produce various effects, with the probabilities of such effects being unknown [15]. The term “uncertainty” is firmly established in the fields of metrology and measurement techniques. Here, this term may be considered in its broader sense, as a set of general doubts about measurement results. More often, however, it is understood “in the narrower meaning”, i.e. as a parameter describing the limits of variation in measurement results. In the Guide to the Expression of Uncertainty in Measurement [16], the notion of uncertainty is defined as a “parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand”.

In the formulation of expert’s opinions about road accidents, the uncertainty of calculation results will always be involved. This will be related to both the uncertainty of the data assumed and the uncertainty of the computing tools used. With some simplification, the uncertainty of results of accident reconstruction calculations may be considered as corresponding to the notion of uncertainty of an indirect measurement in measurement technology. It may be assumed that the uncertainty of calculation results obtained during an analysis (reconstruction) of a road accident (or, in more general terms, accident situation) will be a parameter (or a set of parameters) describing the possible dispersion of values of the quantity (or quantities) determined by the calculations.

In terms of usefulness, the accident reconstruction uncertainty is often associated with the reconstruction reliability. These two notions are not identical. The uncertainty should be understood as defined above, while the reliability is related to the confidence that the reconstruction result (whether or not the uncertainty has been determined) is correct. A formal description of determining the reliability has been proposed in [24], where the reliability has been defined, in the most simplified terms, as the probability that the reconstruction is true, using the probabilistic structure of a Bayesian network.

Objective and Scope of the Study

Limiting ourselves to the part related to purely computational problems, we may present this in the form of a simple diagram (Fig. 1): the expert has a set of data describing the accident under analysis (Data), runs calculations using a method that is available or chosen in consideration of the nature of the incident and the actual purpose of the analysis (Tools), and obtains a specific result (Results). As an example: if the problem under analysis is the vehicle braking process and the quantity to be found is the vehicle stopping distance Sz, the set of input data may consist of the initial velocity of vehicle motion (V0), braking deceleration (ah), driver reaction and braking system response time (tr), and deceleration rise time (tn). As the computing method, any method may be used that is suitable for transforming the data set into the stopping distance Sz to be determined, e.g. the analytical formulas known from the fundamentals of the mechanics of vehicle motion (such as those given in [20]).

Fig. 1

Schematic diagram illustrating the main issues of the process of accident reconstruction
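The braking example above can be sketched in a few lines of code. The relation used here is the commonly quoted approximation Sz = V0·(tr + tn/2) + V0²/(2·ah); both this formula and the numerical values below are illustrative assumptions, not values taken from [20]:

```python
def stopping_distance(v0, a_h, t_r, t_n):
    """Vehicle stopping distance S_z [m] -- a commonly used approximation:
    S_z = V0*(t_r + t_n/2) + V0**2 / (2*a_h)

    v0  -- initial speed V0 [m/s]
    a_h -- full braking deceleration a_h [m/s^2]
    t_r -- driver reaction + braking system response time t_r [s]
    t_n -- deceleration rise time t_n [s]
    """
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * a_h)

# Illustrative (assumed) data: 50 km/h, deceleration typical of dry asphalt
s_z = stopping_distance(v0=50 / 3.6, a_h=7.5, t_r=1.0, t_n=0.2)  # about 28.1 m
```

This transformation of the data set (V0, ah, tr, tn) into the result Sz is the "Tools" step of Fig. 1; the uncertainty methods discussed below address how uncertainty in the four inputs propagates to Sz.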

From the analysis objective point of view, the most important issue is the final analysis result. It is burdened with a definite uncertainty, stemming from the uncertainty of the input data and from the uncertainty generated by the computing method used. As regards the data, the uncertainty may come from different sources. Some of the data may be taken from measurements carried out at the accident site; this is the case where classic measurement uncertainty, both of the random and the systematic type, is encountered. Some other data, however, are assumed by the expert who runs the calculations, because either appropriate measurements are impracticable for technical, organizational or economic reasons, or such data cannot be obtained directly. A good example here is the driver reaction time, whose value may vary within very wide limits depending on many diverse factors, e.g. the complexity of the traffic situation or the momentary psychophysical condition of the driver.

As regards the computing methods, the uncertainty arises from the models and other mathematical tools used, which represent the real phenomena only in a simplified way. Even if the true values of individual parameters of the computational model adopted are used, the result obtained is only an approximation of the true value to be found. Simultaneously, the uncertainty resulting from the use of a specific method is not necessarily correlated with the degree of complexity of the model employed. Here, expert’s knowledge and skills are important for the appropriate selection and use of a model that would best suit the problem under analysis, in respect of the uncertainty as well.

In this study, attention will be focused on the first source of uncertainty and the considerations will be dedicated to the methods that would make it possible to take the uncertainty of the input data into account in the calculations.

The Problem of Uncertainty in the Reconstruction of Road Accidents

Road Accident Reconstruction Methods

In most general terms, the said methods may be divided into two categories:

  • those using mathematical models of the man-vehicle-environment system;

  • those using data recorded by “black box” type devices, i.e. Event Data Recorders (EDR).

The former category is the basic one; the methods of the latter category were unavailable until quite recently. The first automotive EDRs appeared in the mid-1990s, but they still have not become widespread in motor vehicles.

The models in use at present are characterized by very different degrees of complexity, varying from simple analytical models to sophisticated systems where more or less complicated simulation programs must be used. In complex simulation programs, multiple partial models most frequently occur, which represent various subsystems or components of the man-vehicle-environment system and together constitute a test environment designed for specific purposes.

Sources of Uncertainty in the Reconstruction of Road Accidents

As shown in the schematic diagram in Fig. 1, the accident reconstruction calculations are carried out for a certain set of data. In the case of mathematical models being used, the input data are assumed by the expert; if EDR data are available, then the values recorded are often used as the input. Two basic sources of uncertainty of the input data may be distinguished:

  • measurement uncertainty of the quantities measured;

  • uncertainty of the parameter values assumed (referred to as statistical uncertainty).

For data measurement results, the uncertainty sources may be all the factors that are characteristic of the specific measurement techniques (see e.g. [16]), i.e.: incomplete definition of the measurand, uncertainty related to the carrying out of the measurement (including errors of the method and measuring system used, non-representativeness, errors caused by environmental impact, reading errors, approximation and simplification errors), or uncertainty of measuring instruments.

However, the carrying out of full-scope measurements on all the objects involved, whether at the incident site or anywhere else, is hardly possible or actually impracticable. This is due to the scale of such a task, because the number of the quantities to be measured (e.g. the number of data to be introduced to simulation programs) may be of the order of several hundred or even more. The measurement of some parameters may be infeasible (as an example, this applies to many parameters that describe the collision process, such as the vehicle body stiffness curve or characteristics of other parts or objects damaged during the collision). The measurement of many other parameters might be possible, but this would require a lot of complicated and costly work (an example might be such inertial parameters of a vehicle as the location of the centre of vehicle mass or the moments of inertia of vehicle body solid or road wheels). Therefore, a significant number of data are assumed by the expert, based on technical documentation of the vehicles involved, simplified models used to estimate the values of the quantities in question, expert’s experience, or specialized literature.

There is also a specific category of parameters that are measurable in principle but actually cannot be measured, or can be measured in exceptional situations only. At the same time, such parameters are often critical from the point of view of the course of the incident. As regards vehicle motion, two parameter groups should be pointed out here (this has already been mentioned in Sect. 1.3): one of them is related to the characteristics of the tangential tyre-road interaction (in simplified terms, the tyre-road adhesion characteristics) and the other is related to the description of the behaviour of humans (vehicle drivers or pedestrians).

Another source of uncertainty is the tool used to transform the set of input data into the set of analysis results sought. In the case of classic calculations, such a tool is the computing method employed, i.e. the mathematical model of the phenomenon under analysis. This type of uncertainty is referred to as modelling uncertainty. Its estimation is based on data obtained from validation or experimental verification of the model.

To recapitulate: the uncertainty of the calculation results obtained in an analysis of accident situations is a function of the uncertainty of the input data taken for the calculations (burdened with measurement uncertainty or uncertainty stemming from specific attributes of the data) and the uncertainty of the computing tool. A separate problem is the method of transforming the uncertainty of the input data into the uncertainty of the calculation results to be found, i.e. the method of taking the data uncertainty into account. Depending on the selection of this method (including its applicability to the specific computing method used to analyse the situation), different uncertainty of the calculation results may be obtained at the same uncertainty of the input data.

Review of the Literature Dealing with Uncertainty in Road Accident Analysis and Objective of the Study

The problem of uncertainty in the analysis of road accidents, although encountered from the very outset of accident reconstruction attempts, has actually been addressed in the scientific literature for quite a short time. The first publications that directly refer to the issues of uncertainty in the field of analyses of accident situations in road traffic date back to the first half of the 1990s. These were the American works Uncertainty in Accident Reconstruction Calculation [5] and The Technique of Uncertainty Analysis as Applied to the Momentum Equation for Accident Reconstruction [22]. Both of them present analytical methods that make it possible to determine the uncertainty of the results obtained, together with the applications of such methods to simple calculations related to accident reconstruction (estimation of the stopping distance, estimation of the pre-impact velocities). An important item is the publication Uncertainty Analysis for Forensic Science [8], where the authors present fundamentals of the uncertainty calculus (including the probability theory and sensitivity analysis) from the point of view of its applicability to the preparation of forensic experts’ opinions, including those related to accident analysis.

To date, many publications have come out that raise these problems. Apart from the works mentioned above, various methods of taking data uncertainty into account have been considered. The use of the total differential method has been discussed e.g. in [25]. In [2], the finite-difference method has been used to estimate the uncertainty. Numerous publications have dealt with the use of the Monte-Carlo method [1, 9, 10, 12, 17, 26, 27]. A technique using elements of the DoE (Design of Experiments) theory is also employed [6]. A probabilistic approach to uncertainty may be found in [11], where the uncertainty is defined as conditional probability. The publications [3, 13] cover the issue of measurement uncertainty in the reconstruction of motor vehicle collisions. The estimation of uncertainty employing “interval arithmetic”, combined with elements of the DoE theory, is also considered in the literature [28]. In [18], the point estimation method has been presented as a probabilistic tool for determining the uncertain parameters of a vehicle collision. The issues concerning the uncertainty of accident reconstruction calculations have also been touched upon indirectly in [21], where the sensitivity of the calculated vehicle velocity change (ΔV) to vehicle and impact parameters is discussed, and in [23], where the coherence of data recorded in an accident database is analysed. A reference to this problem has also been made in [4], where a method has been presented that makes it possible to reduce the uncertainty of the estimated velocity of a pedestrian crossing the road.

The above shows that there are many methods of determining the uncertainty of calculations. Hence, a question arises about the comparability of the results of such calculations. A discussion of this matter has already been presented in [19]. In this study, the authors return to this issue, increasing the number of methods considered. With reference to the schematic diagram shown in Fig. 1 herein:

  • seven useful methods of transforming the uncertainty of input data into the uncertainty of calculation results have been presented, together with formal descriptions;

  • an example of their use has been demonstrated, comparing the uncertainties obtained by different methods.

Computing Methods in the Analysis of Uncertainty of Accident Reconstruction

Theoretical Foundations of the Seven Methods

First, let us assume that an adequate data set and a tool (mathematical model) making it possible to calculate the quantities to be found are available. To generalize, let us adopt a matrix notation as a more convenient form, treating the set of input data as a data vector and the set of calculation results as a result vector:

$$\varvec{y} = \varvec{f}(\varvec{x})$$
(1)

where x = [x1, x2, …, xm]T—data vector, known; y = [y1, y2, …, yn]T—result vector, to be found; f = [f1, f2, …, fn]T—functional vector, describing the relation between x and y (a mathematical model).

Let us assume that the uncertainties of the input data are also known:

Δx = [Δx1, Δx2, …, Δxm]T—vector of uncertainty of the estimation of vector x components.

The following vectors are to be found:

y = [y1, y2, …, yn]T and Δy = [Δy1, Δy2, …, Δyn]T; the latter is the vector of absolute uncertainty of the estimation of vector y components.

When the absolute uncertainty is normalized in relation to the nominal value, a relative uncertainty is obtained:

$$\varDelta x_{rel} = \left[ \varDelta x_{1} /x_{1} ,\;\varDelta x_{2} /x_{2} ,\; \ldots ,\;\varDelta x_{m} /x_{m} \right]^{T} \;{\text{and}}\;\varDelta y_{rel} = \left[ \varDelta y_{1} /y_{1} ,\;\varDelta y_{2} /y_{2} ,\; \ldots ,\;\varDelta y_{n} /y_{n} \right]^{T}$$
(2)

In the measurement uncertainty theory, two basic approaches are discerned, where the uncertainty is determined with using:

  • a deterministic model, also referred to as “interval model”, where the notion of probability is not involved and the uncertainty value (Δyi, i = 1, …, n) having been determined is the uncertainty bound (maximum);

  • a probabilistic (or statistical) model, where the result (yi, i = 1, …, n) is intrinsically a random variable and its uncertainty is measured by the dispersion of its distribution; in most cases, the parameters used as measures are standard deviation (“standard uncertainty”) or its multiple (“expanded uncertainty”).

In the four subsections below, deterministic methods will be presented, i.e. the upper and lower bounds method (or extreme values method, EVM), the first-order and second-order total differential methods (TDM and TDM2, respectively), and the finite-difference method (FDM); three probabilistic methods, i.e. the Gauss method (PrM), the method based on the description of stochastic processes (PrStM), and the Monte-Carlo method (MCM), will be described in the subsequent subsections.

Description of the Seven Methods

Upper and Lower Bounds Method (EVM)

In the upper and lower bounds method (or extreme values method), an assumption is made that the value of the quantity to be found, i.e. the value of a component of vector y, lies between the minimum and maximum values obtained by substitution of the minimum and maximum values of vector x components.

$${\mathbf{y}}_{{\min} } /{\mathbf{y}}_{{\max} } = {\mathbf{f}}\left( {{\mathbf{x}}_{{\min} /{\max} } } \right)$$
(3)

where xmin = [x1min, x2min, …, xmmin]T, xmax = [x1max, x2max, …, xmmax]T (e.g.: xjmin = xj − Δxj, xjmax = xj + Δxj, j = 1, …, m).

A measure of the uncertainty of the quantity y to be found is the difference:

$$\varvec{\Delta y} = \left| {\varvec{y}_{{\max} } - \varvec{y}_{{\min} } } \right|\;\;{\text{or, better, a half of it, i.e.:}}\;\;\varvec{\Delta y} = \left| {\varvec{y}_{{\max} } - \varvec{y}_{{\min} } } \right|/2$$
(4)

A graphic interpretation of the uncertainty determined by means of the extreme values method has been shown in Fig. 2, based on an example with a function of a single variable.

Fig. 2

Illustration of uncertainty in the extreme values method (subscript “0” indicates the nominal value)

An important assumption made in this method is the requirement of monotonicity of the function yi = fi(xj) on the interval of vector x component values under analysis (this guarantees that the components of vector y attain their extreme values at the ends of the intervals defined by the xmin/max values). Depending on the monotonicity type, yimin/max will be treated as a function of xjmin or xjmax:

$$\frac{{\partial y_{i} }}{{\partial x_{j} }} > 0 \, \Rightarrow y_{i {\rm min}/{\rm max}} = f_{i} \left( {x_{j{\min} /{\max} } } \right)$$
(5)
$$\frac{{\partial y_{i} }}{{\partial x_{j} }} < 0 \Rightarrow y_{i \min/\max} = f_{i} \left( {x_{j{\max} /{\min} } } \right)$$
(6)

If the function yi = fi(xj) is not monotonic on the intervals defined by the xmin/max values, local extrema must be identified for the ymin/ymax extreme values to be determined.
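The EVM can be sketched as follows, using the stopping-distance relation from Sect. 1.3 as the model (the formula and all numerical values are illustrative assumptions). Evaluating the function at every corner of the data hyper-rectangle handles either monotonicity type of Eqs. (5)-(6) automatically, at the cost of 2^m function evaluations:

```python
from itertools import product

def stopping_distance(v0, a_h, t_r, t_n):
    # assumed example model: S_z = V0*(t_r + t_n/2) + V0**2/(2*a_h)
    return v0 * (t_r + t_n / 2.0) + v0 ** 2 / (2.0 * a_h)

def evm_uncertainty(f, x0, dx):
    """Extreme values method: evaluate f at every corner of the
    hyper-rectangle [x0 - dx, x0 + dx] and take half the spread (Eq. (4)).
    Valid when f is monotonic in each argument on these intervals."""
    corners = product(*[(xj - dxj, xj + dxj) for xj, dxj in zip(x0, dx)])
    values = [f(*c) for c in corners]
    y_min, y_max = min(values), max(values)
    return (y_min + y_max) / 2.0, (y_max - y_min) / 2.0

x0 = (50 / 3.6, 7.5, 1.0, 0.2)   # assumed nominal data: V0, a_h, t_r, t_n
dx = (2 / 3.6, 0.5, 0.2, 0.05)   # assumed data uncertainties
y_mid, delta_y = evm_uncertainty(stopping_distance, x0, dx)
```

For non-monotonic functions this corner enumeration is insufficient and the local extrema inside the intervals would have to be searched for, as noted above.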

Total Differential Method (TDM)

Here, the nominal values of vector x components (x(0)= [x1(0), x2(0), …, xm(0)]T) and the Δx uncertainty values (Δx = [Δx1, Δx2, …, Δxm]T) are known. The y = [y1, y2, …, yn]T values to be found are directly defined by Eq. (1) for the set of nominal x(0) values.

In the total differential method, the uncertainty of determining vector y components can be found by using the notion of first-order sensitivity coefficient and the total differential:

$$\Delta y_{i} = \sum\limits_{j = 1}^{m} {\left| {W_{ij} \cdot \Delta x_{j} } \right|} \;\;{\text{where}}\;W_{ij} = \left. {\frac{{\partial y_{i} }}{{\partial x_{j} }}} \right|_{{x_{j} = x_{j(0)} }} ,\quad i = 1, \ldots ,n$$
(7)

In the matrix notation, this may be written as follows:

$${\mathbf{\Delta y}} = {\mathbf{W}} \cdot {\mathbf{\Delta x}} = \left[ {\begin{array}{*{20}c} {\left| {W_{11} } \right|} & \cdots & {\left| {W_{1m} } \right|} \\ \vdots & \ddots & \vdots \\ {\left| {W_{n1} } \right|} & \cdots & {\left| {W_{nm} } \right|} \\ \end{array} } \right] \cdot \left[ {\begin{array}{*{20}c} {\Delta x_{1} } \\ \vdots \\ {\Delta x_{m} } \\ \end{array} } \right]$$
(8)

A graphic interpretation of the uncertainty determined by means of the total differential method has been illustrated in Fig. 3. It should be noted that in this method, the uncertainty is determined by linearization of function fi(x1, …, xm), i = 1, …, n.

Fig. 3

Illustration of uncertainty in the total differential method

The uncertainty vector Δy = [Δy1, Δy2, …, Δyn]T defines the maximum values of errors in estimating vector y components, i.e. the uncertainty bound. For linear models yi= fi(xj), this method becomes identical with the extreme values method.

This method is convenient, but it only produces good results when the relations fi(xj) are characterized by relatively small changes in the sensitivity coefficient Wij within the interval xj ± Δxj of interest. Its main advantage is that it directly includes elements of sensitivity analysis, which makes it possible to identify the parameters whose impact on the calculation results is more or less considerable.

One of the weak points of determining the uncertainty with the use of formulas (7) or (8) may be the unreasonably “extended” uncertainty range, which hinders its practical use in estimating the uncertainty (this will be demonstrated in a calculation example; the same, however, may be said about the EVM). This applies in particular to situations where many data xj are burdened with uncertainty and the “effects” of the individual uncertainties are summed up due to the nature of the method (formulas (7) or (8)). As mentioned previously, this method determines the uncertainty bound under the assumption that the situation where all the data take values at the ends of their intervals can occur with a probability identical to that of any other situation. In practice, such a case is hardly realistic. Therefore, to determine the uncertainty by this method, a procedure is sometimes run that is similar to that adopted for complex measurement uncertainties and a statistical model. In such a case, the uncertainty is assumed as a vector sum of uncertainty components; this is the “combined standard uncertainty” determined in accordance with the “law of propagation of uncertainty” (also referred to as the “uncertainty propagation rule”) [8, 16]:

$$\Delta y_{i} = \sqrt {\sum\limits_{j = 1}^{m} {\left( {W_{ij} \cdot \Delta x_{j} } \right)^{2} } }$$
(9)

Sometimes, the uncertainty thus determined is called “mean square uncertainty”, e.g. in [26]. To differentiate, the uncertainty defined by (7) or (8) will be denoted here by TDMM (“maximum uncertainty” or “uncertainty bound”) while that defined by (9) will be denoted by TDMS (“mean square uncertainty”).
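Both variants of the TDM can be sketched for the same assumed stopping-distance model, with the first-order sensitivity coefficients written analytically; the function returns the maximum uncertainty TDMM of formula (7) and the mean square uncertainty TDMS of formula (9) (the model and the numerical data are illustrative assumptions):

```python
def tdm_uncertainty(x0, dx):
    """First-order total differential method for the assumed model
    S_z = V0*(t_r + t_n/2) + V0**2/(2*a_h).
    Returns (TDMM, TDMS): the maximum uncertainty of formula (7) and the
    mean square ('combined') uncertainty of formula (9)."""
    v0, a_h, t_r, t_n = x0
    # analytic first-order sensitivity coefficients W_j = dS_z/dx_j
    w = (t_r + t_n / 2.0 + v0 / a_h,    # dS_z/dV0
         -v0 ** 2 / (2.0 * a_h ** 2),   # dS_z/da_h
         v0,                            # dS_z/dt_r
         v0 / 2.0)                      # dS_z/dt_n
    terms = [abs(w_j * dx_j) for w_j, dx_j in zip(w, dx)]
    tdmm = sum(terms)                          # formula (7)
    tdms = sum(t ** 2 for t in terms) ** 0.5   # formula (9)
    return tdmm, tdms

x0 = (50 / 3.6, 7.5, 1.0, 0.2)   # assumed nominal data: V0, a_h, t_r, t_n
dx = (2 / 3.6, 0.5, 0.2, 0.05)   # assumed data uncertainties
tdmm, tdms = tdm_uncertainty(x0, dx)
```

For this almost linear model, TDMM comes out close to the half-spread given by the EVM, while TDMS is markedly smaller, which illustrates the difference between the uncertainty bound and the combined standard uncertainty discussed above.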

Higher-Order Total Differential Method (TDM2)

In the classic total differential method described above, the function y = f(x) is linearized. In the case of non-linear relations, when considerable changes in the sensitivity coefficient Wij occur in the interval xj ± Δxj of interest (i.e. at a significant non-linearity), the uncertainty determined will be burdened with an error (cf. Figs. 2 and 3).

Formulas (7) and (8) may be derived by expanding the function y = f(x) into a Taylor series:

$$\begin{aligned} & f_{i} (x_{1} + \Delta x_{1} ,x_{2} + \Delta x_{2} , \ldots ,x_{m} + \Delta x_{m} ) = f_{i} (x_{1} ,x_{2} , \ldots ,x_{m} ) + \frac{{\partial f_{i} }}{{\partial x_{1} }}\Delta x_{1} + \frac{{\partial f_{i} }}{{\partial x_{2} }}\Delta x_{2} + \cdots + \frac{{\partial f_{i} }}{{\partial x_{m} }}\Delta x_{m} \\ & \quad + \frac{{\partial^{2} f_{i} }}{{\partial x_{1}^{2} }} \cdot \frac{{\left( {\Delta x_{1} } \right)^{2} }}{2!} + \frac{{\partial^{2} f_{i} }}{{\partial x_{2}^{2} }} \cdot \frac{{\left( {\Delta x_{2} } \right)^{2} }}{2!} + \cdots + \frac{{\partial^{2} f_{i} }}{{\partial x_{m}^{2} }} \cdot \frac{{\left( {\Delta x_{m} } \right)^{2} }}{2!} \\ & \quad + 2 \cdot \frac{{\partial^{2} f_{i} }}{{\partial x_{1} \partial x_{2} }} \cdot \frac{{\Delta x_{1} \Delta x_{2} }}{2!} + 2 \cdot \frac{{\partial^{2} f_{i} }}{{\partial x_{1} \partial x_{3} }} \cdot \frac{{\Delta x_{1} \Delta x_{3} }}{2!} + \cdots + 2 \cdot \frac{{\partial^{2} f_{i} }}{{\partial x_{1} \partial x_{m} }} \cdot \frac{{\Delta x_{1} \Delta x_{m} }}{2!} \\ & \quad + 2 \cdot \frac{{\partial^{2} f_{i} }}{{\partial x_{2} \partial x_{3} }} \cdot \frac{{\Delta x_{2} \Delta x_{3} }}{2!} + \cdots + 2 \cdot \frac{{\partial^{2} f_{i} }}{{\partial x_{2} \partial x_{m} }} \cdot \frac{{\Delta x_{2} \Delta x_{m} }}{2!} + \cdots 2 \cdot \frac{{\partial^{2} f_{i} }}{{\partial x_{m - 1} \partial x_{m} }} \cdot \frac{{\Delta x_{m - 1} \Delta x_{m} }}{2!} + \cdots \\ \end{aligned}$$
(10)

Hence, the following will be obtained:

$$\begin{aligned} \Delta y_{i} & = f_{i} (x_{1} + \Delta x_{1} ,x_{2} + \Delta x_{2} , \ldots ,x_{m} + \Delta x_{m} ) - f_{i} (x_{1} ,x_{2} , \ldots ,x_{m} ) \\ & = \sum\limits_{j = 1}^{m} {\frac{{\partial f_{i} }}{{\partial x_{j} }}\Delta x_{j} } + \frac{1}{2}\sum\limits_{j = 1}^{m} {\frac{{\partial^{2} f_{i} }}{{\partial x_{j}^{2} }} \cdot \left( {\Delta x_{j} } \right)^{2} } + \sum\limits_{\begin{subarray}{l} j = 1,k = 2 \\ k > j \end{subarray} }^{m} {\frac{{\partial^{2} f_{i} }}{{\partial x_{j} \partial x_{k} }} \cdot \Delta x_{j} \Delta x_{k} + \cdots } \\ \end{aligned}$$
(11)

If only the term with the first-order derivative is taken into account then, after absolute values are introduced to make individual equation terms independent of the sign of the derivative values, a relation described by formula (7) will be obtained. If the terms with the second-order derivatives are also taken into account then an equation defining the uncertainty by the second-order total differential method TDM2 will be formulated:

$$\Delta y_{i} = \sum\limits_{j = 1}^{m} {\left| {W_{ij} \cdot \Delta x_{j} } \right|} + \frac{1}{2}\sum\limits_{j = 1}^{m} {\left| {W_{ijj}^{(2)} \cdot \Delta x_{j}^{2} } \right|} + \sum\limits_{\begin{subarray}{l} j = 1,k = 2 \\ k > j \end{subarray} }^{m} {\left| {W_{ijk}^{(2)} \cdot \Delta x_{j} \cdot \Delta x_{k} } \right|}$$
(12)

where \(W_{ijk}^{(2)} = \left. {\frac{{\partial^{2} y_{i} }}{{\partial x_{j} \partial x_{k} }}} \right|_{{x_{j} = x_{j(0)} ,x_{k} = x_{k(0)} }}\), i = 1, …, n and j, k = 1, …, m.

Coefficients \(W_{ijk}^{(2)}\) are coefficients of the second-order sensitivity of the ith quantity to the jth and kth parameter. In qualitative terms, the difference between the TDM and TDM2 methods has been illustrated in Fig. 4. For linear models yi= fi(xj), this method becomes identical with the extreme values method and the first-order total differential method.

Fig. 4

Illustration of the difference between uncertainties determined by the total differential methods TDM and TDM2

Equation (10) may also be used to derive formulas for determining uncertainty with the higher-order terms taken into account. However, this is of limited practical importance in real applications. For functions of multiple variables, the number of partial derivatives (sensitivity coefficients) becomes very large. As an example: two first-order and three second-order sensitivity coefficients have to be determined for a function of two variables, while for a function of six variables, the numbers of such coefficients rise to 6 and 21, respectively (the number of second-order coefficients is equal to the number of 2-combinations with repetition from an m-element set). It should also be noted that if the uncertainty is determined with the use of total differentials of an order higher than one, the uncertainty value obtained will always be raised, which considerably reduces the usefulness of this approach.
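The growth in the number of coefficients can be seen even in a two-variable case. Below, the second-order correction of formula (12) is sketched for the braking distance S = V0²/(2a), a model whose second-order derivatives are easy to write by hand (the model and the numerical data are illustrative assumptions):

```python
def tdm2_uncertainty(v0, a, dv, da):
    """TDM and TDM2 uncertainties (formulas (7) and (12)) for the braking
    distance S = V0**2/(2*a) -- an assumed two-variable example whose
    second-order sensitivity coefficients are easy to write by hand."""
    # first-order sensitivity coefficients
    w_v = v0 / a                   # dS/dV0
    w_a = -v0 ** 2 / (2 * a ** 2)  # dS/da
    # second-order sensitivity coefficients
    w_vv = 1.0 / a                 # d2S/dV0^2
    w_aa = v0 ** 2 / a ** 3        # d2S/da^2
    w_va = -v0 / a ** 2            # d2S/dV0 da
    first = abs(w_v * dv) + abs(w_a * da)                    # formula (7)
    second = (0.5 * (abs(w_vv * dv ** 2) + abs(w_aa * da ** 2))
              + abs(w_va * dv * da))                         # 2nd-order terms of (12)
    return first, first + second

dy_tdm, dy_tdm2 = tdm2_uncertainty(v0=50 / 3.6, a=7.5, dv=2 / 3.6, da=0.5)
```

The second-order terms add a positive correction; as noted above, taking higher-order differentials into account in this way always shifts the uncertainty value upwards.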

Finite-Difference Method (FDM)

The finite-difference method of uncertainty calculation is, in practice, a simplified version of the total differential method: the partial derivatives do not have to be determined in analytical form. As in the TDM case, the uncertainty formula is derived by expanding the function into a Taylor series (see Eq. 10) confined to the first-order terms only. The partial derivative (sensitivity coefficient) values are estimated with a difference quotient, i.e. by replacing the derivative with a ratio of increments:

$$\frac{{\partial y_{i} }}{{\partial x_{j} }} \approx \frac{{\delta y_{i} }}{{\delta x_{j} }} = \frac{{f_{i} (x_{j} + \delta x_{j} ) - f_{i} (x_{j} )}}{{\delta x_{j} }}$$
(13)

where δxj—sufficiently small increment of the xj value; δyi—increment of the function value caused by δxj.

The uncertainty formula has a form similar to that of (7):

$$\Delta y_{i} = \sum\limits_{j = 1}^{m} {\left| {W_{ij}^{\delta } \cdot \Delta x_{j} } \right|} \;\;{\text{where}}\;W_{ij}^{\delta } = \left. {\frac{{\delta y_{i} }}{{\delta x_{j} }}} \right|_{{x_{j} = x_{j(0)} }} ,\quad i = 1, \ldots ,n$$
(14)

For linear models yi= fi(xj), this method becomes identical with the methods presented previously.

As in the TDM case, the uncertainty may also be determined here as a vector sum of the uncertainty components:

$$\Delta y_{i} = \sqrt {\sum\limits_{j = 1}^{m} {\left( {W_{ij}^{\delta } \cdot \Delta x_{j} } \right)^{2} } }$$
(15)

The δxj value is selected arbitrarily (adequate experience of the person running the calculations is therefore welcome). It should be small enough for the partial derivative value to be satisfactorily approximated. According to [8], the δxj value should initially be assumed as about 0.01xj(0) and then, if necessary, gradually reduced until it no longer affects the uncertainty level Δyi obtained.
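As an illustration only, the FDM scheme of Eqs. (13)-(15) can be sketched as a generic routine; the function and parameter values in the check below are hypothetical, not taken from the paper:

```python
def fdm_uncertainty(f, x0, dx, rel_step=0.01, mean_square=False):
    """Uncertainty by the finite-difference method.
    f           -- model y = f(x), taking a sequence of parameter values
    x0          -- nominal parameter values x_j(0)
    dx          -- parameter uncertainties Delta x_j
    rel_step    -- relative increment for the difference quotient,
                   about 0.01 * x_j(0) as recommended in [8]
    mean_square -- False: maximum uncertainty (Eq. 14);
                   True: vector (root-sum-square) uncertainty (Eq. 15)."""
    y0 = f(x0)
    terms = []
    for j, (xj, dxj) in enumerate(zip(x0, dx)):
        step = rel_step * xj if xj != 0 else rel_step
        x_pert = list(x0)
        x_pert[j] = xj + step
        w = (f(x_pert) - y0) / step      # difference quotient, Eq. (13)
        terms.append(w * dxj)
    if mean_square:
        return sum(t * t for t in terms) ** 0.5
    return sum(abs(t) for t in terms)

# For a linear model the FDM result coincides with the exact TDM value:
f_lin = lambda x: 2.0 * x[0] + 3.0 * x[1]
print(fdm_uncertainty(f_lin, [1.0, 1.0], [0.1, 0.1]))   # ~0.5 (2*0.1 + 3*0.1)
```

In practice one would re-run the routine with a decreasing `rel_step` until the result stabilizes, as recommended above.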

Gauss Probabilistic Method (PrM)

The uncertainty determination methods described above are categorized as deterministic. In such an approach, any combination of values xj falling into the intervals xj(0) ± Δxj, j = 1, …, m is considered equally probable; in consequence, the uncertainty of the calculations may be overestimated. To account for the fact that some combinations (e.g. all the xj values lying at the ends of the intervals xj(0) ± Δxj) may occur with low probability, the probabilistic nature of the quantities under analysis should be taken into account.

In the probabilistic methods, the components of vector x: xj, j = 1, …, m are assumed to be random variables with known probability distributions. In consequence, the components of vector y: yi, i = 1, …, n defined by the functional relation y = f(x) are also random variables, and the probability distribution of vector x determines the distribution of vector y. However, determining the latter analytically when the numbers of components of vectors x and y exceed 2 and the functional vector f is non-linear is a complicated problem, solvable only in some specific cases. In the applications under consideration, it is therefore justified to use a simplified method, which may be found in the literature on measurement uncertainty, including [16], and on the analysis of accident situations, such as [7] or [8]. In the calculus of errors, this method is referred to as the “Gauss method” or simply the “statistical method”.

The said method is based on the following assumption: if the quantity to be found is a function of vector x: y = f(x) and the components of vector x: xj, j = 1, …, m are described as independent random variables with normal probability distribution \(N_{xj} (\bar{x}_{j} ,\sigma_{xj} )\), where \(\bar{x}\) is the mean value and \(\sigma_{x}\) is the standard deviation, then yi, i = 1, …, n is a random variable with normal probability distribution \(N_{yi} (\bar{y}_{i} ,\sigma_{yi} )\) and the mean value \(\bar{y}_{i}\) is a function of the mean values of vector x components:

$$\bar{y}_{i} = f_{i} (\bar{x}_{1} ,\bar{x}_{2} , \ldots ,\bar{x}_{m} ),\quad i = 1, \ldots ,n$$
(16)

The standard deviation \(\sigma_{yi}\) may be expressed by the following formula (identical with the formula of combined standard uncertainty [16]):

$$\sigma_{yi} = \sqrt {\sum\limits_{j = 1}^{m} {\left( {\frac{{\partial y_{i} }}{{\partial x_{j} }} \cdot \sigma_{xj} } \right)^{2} } } \quad {\text{for}}\;x_{j} = \bar{x}_{j}$$
(17)

The uncertainty of the quantity to be found may be determined for any confidence level.
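A minimal numerical sketch of the Gauss method follows, with the partial derivatives of Eq. (17) approximated by difference quotients at the mean point; all names and the test function are illustrative assumptions:

```python
import math

def gauss_uncertainty(f, x_mean, sigma, h_rel=1e-6):
    """Combined standard uncertainty sigma_y of Eq. (17); the partial
    derivatives are approximated numerically at the mean point x_mean."""
    y0 = f(x_mean)
    var = 0.0
    for j, (xj, sj) in enumerate(zip(x_mean, sigma)):
        h = h_rel * (abs(xj) if xj != 0 else 1.0)
        x_pert = list(x_mean)
        x_pert[j] = xj + h
        dy_dx = (f(x_pert) - y0) / h      # sensitivity at the mean point
        var += (dy_dx * sj) ** 2
    return math.sqrt(var)

def expanded_uncertainty(sigma_y, k):
    """Uncertainty at a chosen confidence level: k = 1, 2, 3 correspond to
    about 68.3%, 95.4% and 99.7% for a normal distribution."""
    return k * sigma_y

# Linear check: sigma_y = sqrt((2*0.1)^2 + (3*0.1)^2) = sqrt(0.13)
f_lin = lambda x: 2.0 * x[0] + 3.0 * x[1]
print(gauss_uncertainty(f_lin, [1.0, 1.0], [0.1, 0.1]))
```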

Method Based on the Description of Stochastic Processes (PrStM)

This method is a generalization of the PrM method. It may be employed when the mathematical model is explicitly dependent on time. In general terms, such a model is a system of differential equations having the following general form:

$${\dot{\mathbf{y}}} = {\mathbf{F}}({\mathbf{y}},t)$$
(18)

where y = [y1, y2, y3, …, yn]T—vector of state coordinates; F = [f1, f2, f3, …, fn]T—functional vector.

When stochastic processes are introduced to the model, Eq. (18) may take a general form:

$${\dot{\mathbf{y}}} = {\mathbf{F}}({\mathbf{y}},t) + {\mathbf{G}}({\mathbf{y}},t) \cdot {\mathbf{X}}_{t}$$
(19)

where \({\mathbf{G}} = \left[ {\begin{array}{*{20}c} {g_{11} } & \cdots & {g_{1m} } \\ \vdots & \ddots & \vdots \\ {g_{n1} } & \cdots & {g_{nm} } \\ \end{array} } \right]\), gij= gi(yj,t) and Xt= [Xt1, Xt2, Xt3, …, Xtm]T—vector of an m-dimensional stochastic process.

The methods of solving these equations depend on the equation form and on the nature of the stochastic processes. A good point of this approach is that the results are obtained in the form of complete probabilistic characteristics of the parameters sought, determined for any freely chosen instant. On the other hand, the difficulty of obtaining an analytical solution is a serious limitation; significant simplifications (linearization, simplified descriptions of the stochastic processes) are often indispensable even for models that are not very complicated. The characteristics of the stochastic process must also be determined: for processes handled within the correlation theory of stochastic processes, the function describing the expected value and the correlation function must be known, and the latter is generally very difficult to determine. The applications of this method to the problems under consideration are therefore very restricted (nevertheless, an example application will be presented in Sect. 4).

Monte-Carlo Method (MCM)

The Monte-Carlo technique is now one of the most powerful computing tools used in the analysis of phenomena and processes that cannot be described by analytical models because of their complexity. It works particularly well in computational problems where random phenomena should be taken into account. In general terms, its essence lies in repeating an experiment many times, with the test parameter values being changed at random within a range defined by the specific type of the experiment and the phenomenon examined. Because of its iterative nature, the technique is counted among simulation methods; for this reason, the term “Monte-Carlo simulation” is often found in the literature (see e.g. [8, 9, 26]).

For the issues in question, this method makes it possible to find the probability distributions sought, using a predetermined model in the form of a function y = f(x) that represents the phenomenon under analysis. The components of vector x: xj, j = 1, …, m are assumed to be random variables with known characteristics (determined theoretically or empirically).

The random variables yi, i = 1, …, n are determined by multiple numerical calculations made according to the predetermined relation y = f(x) for computer-generated pseudorandom numbers xj drawn in accordance with the appropriate distributions of the specific quantities. The method may also be employed when simulation models are used: multiple simulations are carried out for randomly generated values of the individual model parameters. The possible range of solutions yi is obtained from the pseudorandom statistical distributions of the variables yi generated as described above, and the uncertainty measures are the measures of dispersion of these statistical pseudo-distributions.

This method makes it possible to avoid the difficulties mentioned in sub-items 3.2.5 and 3.2.6. The correctness of the results is considerably affected by the quality of the pseudorandom-number generators (measured by the finite length of the generator cycle). It is also noteworthy that, in a degenerate form, i.e. in calculations carried out only for the extreme values of the xj distributions and under an assumption of monotonicity of yi= f(xj), this method is equivalent to the extreme values method (EVM).
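The general Monte-Carlo scheme described above can be sketched as follows; this is a minimal illustration with an assumed sampler interface, not the program used in the study:

```python
import random
import statistics

def monte_carlo(f, samplers, n=100_000, seed=0):
    """Monte-Carlo estimate of the distribution of y = f(x). Each entry of
    `samplers` is a callable draw(rng) producing one pseudorandom value of
    the corresponding input x_j; the dispersion of the resulting sample of
    y values serves as the uncertainty measure."""
    rng = random.Random(seed)
    ys = [f([draw(rng) for draw in samplers]) for _ in range(n)]
    return statistics.mean(ys), statistics.stdev(ys)

# Toy check: the sum of two U(0, 1) variables has mean 1 and
# standard deviation sqrt(2/12) ~ 0.408.
mean, sd = monte_carlo(lambda x: x[0] + x[1],
                       [lambda r: r.uniform(0.0, 1.0),
                        lambda r: r.uniform(0.0, 1.0)])
print(mean, sd)
```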

Example Application of the Methods

Calculation of the Uncertainty of Estimation of the Vehicle Stopping Distance

The calculations will be made for one of the standard problems in accident situation analyses, i.e. the vehicle braking process. This example has other good points as well: the process can be described by a simple analytical model that is, at the same time, a good mathematical representation. On the other hand, the parameters of this model describe all the components of the man-vehicle-road system, and their values are taken, to a significant extent, from the literature (they are thus burdened with statistical uncertainty).

The work with the mathematical model starts from a simplified time history of the process of vehicle braking on an even horizontal road, as shown in Fig. 5. Assuming additionally that the vehicle is braked with the tyre-road adhesion forces fully utilized, the maximum braking deceleration value ahm is:

$$a_{hm} = \mu \cdot g$$
(20)

where μ [–]—tyre-road adhesion coefficient (peak or sliding); g ≅ 9.81 m/s2—acceleration of gravity.

Fig. 5
figure 5

Simplified time history of the rectilinear vehicle braking

If the initial braking speed V0 (m/s) (the vehicle speed at the instant t0= 0) and the tr, tn, and ahm values (see Fig. 5) are known then the stopping distance may be expressed by a simplified formula:

$$S_{z} = V_{0} \cdot \left( {t_{r} + \frac{{t_{n} }}{2}} \right) + \frac{{V_{0}^{2} }}{2 \cdot \mu \cdot g}$$
(21)

Thus, a functional relation y = f(x) has been obtained, where x = [x1, x2, x3, x4]T≡ [V0, μ, tr, tn]T and y = [y1] ≡ [Sz]; f = [f1], f1 = x1·(x3 + x4/2) + x1²/(2·g·x2) (with the assumption that the g value is certain). The calculations are made to determine the stopping distance y = [y1] ≡ [Sz] and the uncertainty of its value Δy = [Δy1] ≡ [ΔSz], with the assumption that the uncertainty values Δx = [Δx1, Δx2, Δx3, Δx4]T≡ [ΔV0, Δμ, Δtr, Δtn]T are known.
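The model of Eq. (21) is straightforward to implement. The parameter values below are illustrative assumptions only (50 km/h initial speed, dry asphalt, typical reaction and rise times), since Table 1 itself is not reproduced in the text:

```python
G = 9.81  # acceleration of gravity, m/s^2

def stopping_distance(v0, mu, tr, tn):
    """Stopping distance S_z of Eq. (21); v0 in m/s, tr and tn in s."""
    return v0 * (tr + tn / 2.0) + v0 ** 2 / (2.0 * mu * G)

# Illustrative (assumed) data: v0 = 50 km/h, mu = 0.7, tr = 1.0 s, tn = 0.2 s
v0 = 50.0 / 3.6
print(round(stopping_distance(v0, 0.7, 1.0, 0.2), 1), "m")
```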

The uncertainty will be calculated using the seven methods described previously, i.e. EVM, TDM, TDM2, FDM, PrM, PrStM, and MCM, for a common data set. The data set adopted is given in Table 1 and represents the typical road conditions described below. The initial braking speed has been assumed as equal to the speed limit applicable in built-up areas, with a 10% tolerance (as an allowance for e.g. the accuracy of speedometer readings and the driver’s errors in taking the readings). The tyre-road adhesion coefficient value μ corresponds to a dry asphalt road surface; in this case, the uncertainty has been assumed to be quite low (see the data given in the literature on the mechanics of motor vehicle motion and accident reconstruction, e.g. [7, 20]). The total system response time and the braking deceleration rise time have been adopted in a similar way, with the parameter values and their uncertainties at a realistic level.

Table 1 The set of nominal parameter values and their uncertainties adopted for the calculations

In three methods (TDM, TDM2, PrM), appropriate partial derivatives (sensitivity coefficients) must be determined. For the mathematical model described by Eq. (21), they will have the form as given in Table 2.

Table 2 Sensitivity coefficients of the 1st order (\(W_{{S_{z} j}}\)) and 2nd order (\(W_{{S_{z} jk}}^{(2)}\)) and their values for the nominal set of parameters

Extreme Values Method (EVM)

According to Eq. (3), the extreme values may be determined from the following formulas (thanks to the simple form of the function Sz= f(V0, μ, tr, tn), its monotonicity is known):

$$S_{z{\min} } = V_{0{\min} } \cdot \left( {t_{r{\min} } + \frac{{t_{n{\min} } }}{2}} \right) + \frac{{V_{0{\min} }^{2} }}{{2 \cdot \mu_{{\max} } \cdot g}}$$
(22a)
$$S_{z{\max} } = V_{0{\max} } \cdot \left( {t_{r{\max} } + \frac{{t_{n{\max} } }}{2}} \right) + \frac{{V_{0{\max} }^{2} }}{{2 \cdot \mu_{{\min} } \cdot g}}$$
(22b)

where xjmin = xj(0) − Δxj and xjmax = xj(0) + Δxj.

For the comparability with the other methods to be maintained, the uncertainty has been assumed as a half of the difference between Szmax and Szmin:

$$\Delta S_{z} = \left( {S_{z{\max} } - S_{z{\min} } } \right)/2$$
(23)

The relative uncertainty is the ratio of (23) to the arithmetic average of Szmax and Szmin:

$$\frac{{\Delta S_{z} }}{{S_{z} }} = \frac{{\Delta S_{z} }}{{\frac{1}{2}\left( {S_{z{\max} } + S_{z{\min} } } \right)}} = \frac{{\left( {S_{z{\max} } - S_{z{\min} } } \right)}}{{\left( {S_{z{\max} } + S_{z{\min} } } \right)}}$$
(24)

Total Differential Method (TDM)

Here, the following relations hold:

Nominal value:

$$S_{z(0)} = V_{0(0)} \cdot \left( {t_{r(0)} + \frac{{t_{n(0)} }}{2}} \right) + \frac{{V_{0(0)}^{2} }}{{2 \cdot \mu_{(0)} \cdot g}}$$
(25)

Maximum uncertainty (TDMM):

$$\Delta S_{z} = \left| {\frac{{\partial S_{z} }}{{\partial V_{0} }} \cdot \Delta V_{0} } \right| + \left| {\frac{{\partial S_{z} }}{\partial \mu } \cdot \Delta \mu } \right| + \left| {\frac{{\partial S_{z} }}{{\partial t_{r} }} \cdot \Delta t_{r} } \right| + \left| {\frac{{\partial S_{z} }}{{\partial t_{n} }} \cdot \Delta t_{n} } \right|$$
(26)

Mean square uncertainty (TDMS):

$$\Delta S_{z} = \sqrt {\left( {\frac{{\partial S_{z} }}{{\partial V_{0} }}} \right)^{2} \cdot \Delta V_{0}^{2} + \left( {\frac{{\partial S_{z} }}{\partial \mu }} \right)^{2} \cdot \Delta \mu^{2} + \left( {\frac{{\partial S_{z} }}{{\partial t_{r} }}} \right)^{2} \cdot \Delta t_{r}^{2} + \left( {\frac{{\partial S_{z} }}{{\partial t_{n} }}} \right)^{2} \cdot \Delta t_{n}^{2} }$$
(27)
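The TDM formulas (26)-(27) can be sketched with the analytic sensitivity coefficients of the model of Eq. (21); the derivatives are elementary, and the parameter values used in the check are assumptions, not the published Table 1 data:

```python
import math

G = 9.81  # m/s^2

def tdm_uncertainty(x0, dx, mean_square=False):
    """TDM uncertainty of S_z (Eqs. 26-27) with analytic sensitivity
    coefficients of Eq. (21); x0, dx = (v0, mu, tr, tn) nominal values
    and uncertainties."""
    v0, mu, tr, tn = x0
    w = (tr + tn / 2.0 + v0 / (mu * G),     # dSz/dV0
         -v0 ** 2 / (2.0 * mu ** 2 * G),    # dSz/dmu
         v0,                                # dSz/dtr
         v0 / 2.0)                          # dSz/dtn
    terms = [wi * di for wi, di in zip(w, dx)]
    if mean_square:
        return math.sqrt(sum(t * t for t in terms))    # Eq. (27), TDMS
    return sum(abs(t) for t in terms)                  # Eq. (26), TDMM
```

For the same illustrative data as before, the TDMM value comes out close to the EVM half-width, and the TDMS value is noticeably smaller, consistent with the comparison in Sect. 4.2.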

Second-Order Total Differential Method (TDM2)

The nominal value is defined by formula (25). Based on (12), the uncertainty is described by the following equation:

$$\begin{aligned} \Delta S_{z} & = \left| {\frac{{\partial S_{z} }}{{\partial V_{0} }} \cdot \Delta V_{0} } \right| + \left| {\frac{{\partial S_{z} }}{\partial \mu } \cdot \Delta \mu } \right| + \left| {\frac{{\partial S_{z} }}{{\partial t_{r} }} \cdot \Delta t_{r} } \right| + \left| {\frac{{\partial S_{z} }}{{\partial t_{n} }} \cdot \Delta t_{n} } \right| \\ & \quad + \frac{1}{2}\left( {\left| {\frac{{\partial^{2} S_{z} }}{{\partial V_{0}^{2} }} \cdot \Delta V_{0}^{2} } \right| + \left| {\frac{{\partial^{2} S_{z} }}{{\partial \mu^{2} }} \cdot \Delta \mu^{2} } \right| + \left| {\frac{{\partial^{2} S_{z} }}{{\partial t_{r}^{2} }} \cdot \Delta t_{r}^{2} } \right| + \left| {\frac{{\partial^{2} S_{z} }}{{\partial t_{n}^{2} }} \cdot \Delta t_{n}^{2} } \right|} \right) \\ & \quad + \left| {\frac{{\partial^{2} S_{z} }}{{\partial V_{0} \partial \mu }} \cdot \Delta V_{0} \cdot \Delta \mu } \right| + \left| {\frac{{\partial^{2} S_{z} }}{{\partial V_{0} \partial t_{r} }} \cdot \Delta V_{0} \cdot \Delta t_{r} } \right| + \left| {\frac{{\partial^{2} S_{z} }}{{\partial V_{0} \partial t_{n} }} \cdot \Delta V_{0} \cdot \Delta t_{n} } \right| \\ & \quad + \left| {\frac{{\partial^{2} S_{z} }}{{\partial \mu \partial t_{r} }} \cdot \Delta \mu \cdot \Delta t_{r} } \right| + \left| {\frac{{\partial^{2} S_{z} }}{{\partial \mu \partial t_{n} }} \cdot \Delta \mu \cdot \Delta t_{n} } \right| + \left| {\frac{{\partial^{2} S_{z} }}{{\partial t_{r} \partial t_{n} }} \cdot \Delta t_{r} \cdot \Delta t_{n} } \right| \\ \end{aligned}$$
(28)

Finite-Difference Method (FDM)

The nominal value is defined by formula (25). Based on (14) and (15), the uncertainty is described by the following equations:

Maximum uncertainty (FDMM):

$$\Delta S_{z} = \left| {\frac{{\delta S_{z} }}{{\delta V_{0} }} \cdot \Delta V_{0} } \right| + \left| {\frac{{\delta S_{z} }}{\delta \mu } \cdot \Delta \mu } \right| + \left| {\frac{{\delta S_{z} }}{{\delta t_{r} }} \cdot \Delta t_{r} } \right| + \left| {\frac{{\delta S_{z} }}{{\delta t_{n} }} \cdot \Delta t_{n} } \right|$$
(29)

Mean square uncertainty (FDMS):

$$\Delta S_{z} = \sqrt {\left( {\frac{{\delta S_{z} }}{{\delta V_{0} }}} \right)^{2} \cdot \Delta V_{0}^{2} + \left( {\frac{{\delta S_{z} }}{\delta \mu }} \right)^{2} \cdot \Delta \mu^{2} + \left( {\frac{{\delta S_{z} }}{{\delta t_{r} }}} \right)^{2} \cdot \Delta t_{r}^{2} + \left( {\frac{{\delta S_{z} }}{{\delta t_{n} }}} \right)^{2} \cdot \Delta t_{n}^{2} }$$
(30)

and

$$\frac{{\delta S_{z} }}{{\delta x_{j} }} = \frac{{S_{z} (x_{j(0)} + \delta x_{j} ) - S_{z} (x_{j(0)} )}}{{\delta x_{j} }},\quad j = 1, \ldots ,4$$
(31)

(the other parameters xk, k = 1, …, 4 and k ≠ j take nominal values xk(0)).

The values of increments δxj, j = 1, …, 4 have been assumed as recommended in [8], i.e. δxj= 0.01xj(0).

Gauss Method (PrM)

The mean value follows from applying formula (16) to the model of Eq. (21):

$$\overline{{S_{z} }} = \overline{{V_{0} }} \cdot \left( {\overline{{t_{r} }} + \frac{{\overline{{t_{n} }} }}{2}} \right) + \frac{{\overline{V}_{0}^{2} }}{{2 \cdot \overline{\mu } \cdot g}}$$
(32)

where symbol “¯” indicates the average value of the distribution of the specific parameter, i.e. \(\overline{{V_{0} }} = V_{0(0)} , \, \overline{\mu } = \mu_{(0)} , \, \overline{{t_{r} }} = t_{r(0)} ,\overline{{ \, t_{n} }} = t_{n(0)}\).

Based on (17), the standard deviation of random variable Sz is:

$$\sigma_{{S_{z} }} = \sqrt {\left( {\frac{{\partial S_{z} }}{{\partial V_{0} }}} \right)^{2} \cdot \sigma_{{V_{0} }}^{2} + \left( {\frac{{\partial S_{z} }}{\partial \mu }} \right)^{2} \cdot \sigma_{\mu }^{2} + \left( {\frac{{\partial S_{z} }}{{\partial t_{r} }}} \right)^{2} \cdot \sigma_{{t_{r} }}^{2} + \left( {\frac{{\partial S_{z} }}{{\partial t_{n} }}} \right)^{2} \cdot \sigma_{{t_{n} }}^{2} }$$
(33)

The standard deviations of random variables V0, μ, tr, and tn have been assumed as 1/3 of the uncertainties of these parameters, i.e. \(\sigma_{{V_{0} }} = \Delta V_{0} /3,\sigma_{\mu } = \Delta \mu /3,\sigma_{{t_{r} }} = \Delta t_{r} /3,\sigma_{{t_{n} }} = \Delta t_{n} /3\).

The absolute uncertainty and relative uncertainty are as follows (see also [16]):

$$\begin{aligned} \Delta S_{z} & = 3 \cdot \sigma_{{S_{z} }} \;{\text{at a confidence level of }}\;99.7\% ; \\ \Delta S_{z} & = 2 \cdot \sigma_{{S_{z} }} \;{\text{at a confidence level of }}\;95.4\% ; \\ \Delta S_{z} & = \sigma_{{S_{z} }} \;{\text{at a confidence level of}}\;68.3\% \\ \end{aligned}$$
(34)
$$\frac{{\Delta S_{z} }}{{S_{z} }} = \frac{{\Delta S_{z} }}{{\overline{S}_{z} }}.$$
(35)

Method Based on the Description of Stochastic Processes (PrStM)

The time history of the vehicle braking deceleration has been assumed as having a form similar to that adopted previously (see Fig. 6). Three characteristic phases have been discerned, taking place in the time intervals denoted by tr, tn, and ta, where ta represents the time of braking with the braking force being fully developed. It has been assumed that in the third phase, the braking deceleration is a sum of a defined function of time f(t) (a “trend”) and a stochastic process Xa(t):

Fig. 6
figure 6

Time history of braking deceleration, with random departures from a steady trend

$$\ddot{x} = F(t)\quad {\text{where}}\;F\left( t \right) = f\left( t \right) + X_{a} (t)$$
(36)

Moreover, it has been assumed that:

  • Xa(t)—stationary (in the broad sense) normal stochastic process with mean value of mXa, variance of \(v_{{X_{a} }} = \sigma_{{X_{a} }}^{2}\), and known correlation function KXa(τ);

  • trend is a function having the following form:

$$f\left( t \right) = A \cdot t + B$$
(37)

where A and B—coefficients; in general, they are random variables with normal distribution; A: N(mA, \(\sigma_{A}^{2}\)), B: N(mB, \(\sigma_{B}^{2}\)); m—mean value and σ—standard deviation.

The following initial conditions apply to Eq. (36):

$$\dot{x}(t = 0) = V_{p} ,\;\;x(t = 0) = S_{p}$$
(38)

where in general, Vp and Sp are random variables with normal distribution; Vp: \(N(m_{{V_{p} }} , \, \sigma_{{V_{p} }}^{2} )\), Sp: \(N(m_{{S_{p} }} , \, \sigma_{{S_{p} }}^{2} )\).

If the system response phase (tr) and deceleration rise phase (tn) are taken into account and the inequality 0 ≤ t′ < tr+ tn holds, then Vp and Sp become dependent random variables.

A complete description of the solution shown above may be found in [14]. Without going into detail, the solutions obtained in this case may be proven to be normal random processes. Figure 7 shows time histories of the solutions in the form of mean values of the distance travelled (S), vehicle speed (V), and longitudinal vehicle acceleration (a) and the corresponding time histories of standard deviations σS, σV, and σa. These curves have been obtained for the parameter values corresponding to the data given in Table 1:

Fig. 7
figure 7

Time histories of longitudinal acceleration a (expected value of the trend) and expected values of speed V and distance S, with 3-σ dispersion fields plotted with dotted lines (a), and corresponding standard deviation vs time curves (b)

  • mA= 0 m/s3, σA= 0.0 m/s3; mB= − 6.83 m/s2, σB= 0.164 m/s2;

  • \(m_{{V_{p} }}\) = 12.9 m/s (46.3 km/h), \(\sigma_{{V_{p} }}\) = 0.48 m/s (1.72 km/h); \(m_{{S_{p} }}\) = 22.1 m, \(\sigma_{{S_{p} }}\) = 1.63 m;

  • coefficient of correlation of random variables Sp and Vp: kVS = 0.276; the values describing random variables Sp and Vp have been determined using an analytical model of braking in rectilinear motion for the period tr+ tn (see Figs. 5 and 6a) and employing the Gauss method.

Parameters of random process Xa(t):

  • mXa= 0 m/s2;

  • correlation function form: \(K_{{X_{a} }} (\tau ) = v \cdot e^{ - u \cdot \left| \tau \right|} \cos \omega \tau\)

    where τ—time difference; v = 0.01 m2/s4; u = 4.4 s−1, ω = 20 s−1 (the form of the function and coefficient values have been selected on the grounds of experimental test results [14]).

Monte-Carlo Method (MCM)

For these calculations, a special computer program (MCM) has been written, in which the function described by formula (21) has been implemented. The model parameter values V0, μ (or ahm), tr, and tn may be generated as random (pseudorandom) numbers whose distributions are programmed as functions (e.g. normal, exponential, or uniform) or based on empirical data. The data taken for the calculations corresponded to those specified in Table 1. The histograms representing the distribution of the stopping distance Sz are shown in Figs. 8 and 9. Figure 8 describes the situation where all the data (V0, tr, tn, μ) were treated as random variables with truncated normal distributions (the numbers generated could not differ from the mean by more than three standard deviations); the situation with these data treated as random variables with uniform (rectangular) distributions is illustrated in Fig. 9. As can be seen, the results in both cases resemble in shape the curve of a truncated normal distribution, but with different standard deviation values (higher in the latter case). Strictly speaking, however, neither of the distributions is a truncated normal one: in both cases the distribution curve is slightly asymmetric, with the mode shifted towards the lower Sz values. This is due to the nonlinearity of the relation represented by Eq. (21).

Fig. 8
figure 8

Histogram of stopping distance Sz for input data with normal distribution

Fig. 9
figure 9

Histogram of stopping distance Sz for input data with uniform distribution
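The sampling scheme behind Figs. 8 and 9 can be sketched as follows; this is a minimal re-implementation with assumed nominal values and uncertainties, not the program or the Table 1 data used in the study:

```python
import random
import statistics

G = 9.81  # m/s^2

def stopping_distance(v0, mu, tr, tn):           # Eq. (21)
    return v0 * (tr + tn / 2.0) + v0 ** 2 / (2.0 * mu * G)

def truncated_normal(rng, mean, delta):
    """Normal draw with sigma = delta / 3, rejected outside mean +/- delta
    (i.e. outside three standard deviations), as in the Fig. 8 variant."""
    while True:
        v = rng.gauss(mean, delta / 3.0)
        if abs(v - mean) <= delta:
            return v

def mc_stopping_distance(n=20_000, uniform=False, seed=1):
    """Mean and standard deviation of the sampled S_z distribution."""
    # Illustrative (assumed) nominal values / uncertainties: v0, mu, tr, tn
    params = [(50.0 / 3.6, 5.0 / 3.6), (0.7, 0.05), (1.0, 0.2), (0.2, 0.05)]
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        if uniform:
            x = [rng.uniform(m - d, m + d) for m, d in params]
        else:
            x = [truncated_normal(rng, m, d) for m, d in params]
        ys.append(stopping_distance(*x))
    return statistics.mean(ys), statistics.stdev(ys)

# Uniform inputs spread the S_z histogram more than truncated-normal ones:
m_n, s_n = mc_stopping_distance()
m_u, s_u = mc_stopping_distance(uniform=True)
print(s_n, s_u)
```

The wider spread of the uniform-input variant reproduces, qualitatively, the relation between Figs. 8 and 9.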

Summary of the Results

The calculation results have been summarized in Table 3. They will be discussed in Sect. 4.2.

Table 3 Summarized results of calculations carried out to test 7 uncertainty estimation methods with using the data of Table 1 and a mathematical model described by Eq. (21)

Comparison Between the Seven Methods Used

To facilitate the comparison between the uncertainties estimated with using different methods, the calculation results specified in Table 3 have been presented graphically in Fig. 10 in the form of stopping distance ranges. The following conclusions may be drawn from the results presented:

Fig. 10
figure 10

Comparison of the possible solution ranges at determining the stopping distance Sz for the adopted set of input data (Table 1) and 7 uncertainty estimation methods (EVM—extreme values method, TDM—total differential method, TDM2—second-order total differential method, FDM—finite-difference method, PrM—Gauss probabilistic method, PrStM—probabilistic method based on the description of stochastic processes, MCM—Monte-Carlo probabilistic method; defining symbols added to “TDM” and “FDM”: M—maximum uncertainty, S—mean square uncertainty; defining symbols added to MCM: n—data with normal distribution, u—data with uniform distribution)

  • The ranges of the solutions obtained with the deterministic methods that estimate the maximum uncertainty (EVM, TDMM, TDM2, FDMM) do not differ considerably from each other. The highest value has been obtained for the TDM2 method, whose uncertainty is larger by about 6% than that calculated for the TDM method.

  • The results calculated with using the probabilistic methods PrM, PrStM significantly differ from those obtained from the deterministic methods. The stopping distance ranges are much narrower, which is advantageous from the point of view of usefulness in accident reconstruction.

  • A similar effect may be obtained by using deterministic methods and calculating the mean square uncertainty (TDMS, FDMS).

  • The ranges determined by the Gauss probabilistic method (PrM) and the probabilistic method based on the description of stochastic processes (PrStM) are close to each other. This means that in the case under consideration and in similar problems, the PrM method, being relatively simple in comparison with the PrStM, will be sufficient for determining the probability distribution of the quantity sought.

  • The ranges determined by the Monte-Carlo probabilistic method (MCM) depend on the types of the data probability distributions. In general, they are wider than those obtained from the other probabilistic methods (PrM, PrStM). When the input data are treated as random variables with uniform distributions (MCM-u), the range calculated is close to that determined with the deterministic methods EVM, TDMM, and FDMM. For the data treated as having normal distributions (MCM-n), a narrower range has been obtained, which may be interpreted as an effect of coming closer to the PrM and PrStM methods. Hence, a hypothesis may be formulated that the MCM method is a compromise between the deterministic methods TDM and EVM and the probabilistic methods PrM and PrStM, in terms of both applicability and the reliability of the results obtained. The MCM method may also be considered a good reference for verifying the results obtained with other methods.

Based on the results obtained in the calculation example, it may be stated that introducing data uncertainty into the calculations causes significant differences between the results of such calculations and the “nominal” results (i.e. the results obtained without taking the data uncertainty into account). Such an effect can be observed for each of the methods used to estimate the impact of the said inaccuracies.

In consideration of the above, and of the fact that the data uncertainty values taken for the calculations were not particularly high, the following general conclusion may be drawn: a failure to take the data uncertainty into account may result in the construction of a false hypothesis about the course of a specific accident situation, and the wrong hypothesis may translate into unfair legal consequences for the participants in such a situation. As an example: if, say, the minimum safe value of the distance between the vehicle and the obstacle at the initial instant were 40 m, then, without taking the uncertainty into account, a judgment might be formulated that the driver should have managed to stop the vehicle and the collision would not have taken place. Taking the uncertainty into account, regardless of the method of determining it, would make such a statement unprovable.

It is difficult to indicate unambiguously which of the uncertainty determination methods should be considered the best. The selection depends to a considerable extent on the model (simulation or analytical) adopted to analyse the phenomenon observed and on the determinability of the input data (e.g. the parameters of the random data distributions). When selecting a method, the limitations of each of them described in Sect. 3, as well as the above conclusions drawn from the example application, should be taken into consideration.

Conclusion

The calculations carried out during accident reconstruction are burdened with uncertainty. A failure to take this uncertainty into account may considerably affect the expert’s opinion about the course of the incident under analysis. Correct determination of the uncertainty of the calculation results, and hence of the opinion as a whole, will improve the reliability of the opinion.

In this study, the problems related to determining the uncertainty of calculation results have been discussed. For tools having the form of mathematical models of vehicle dynamics, a set of methods has been presented that make it possible to determine the uncertainty of calculation results stemming from the uncertainty of the input data. The uncertainty determination methods available, known in great measure from the field of measurement uncertainty, differ widely in their complexity and in their applicability to the computing tools used. The example calculations made for the models used in accident reconstruction have shown that the methods may produce results differing, even significantly, from each other. It seems reasonable to use only those methods that, apart from being applicable to the specific analysis tool employed, yield the lowest uncertainty of the expert’s opinion. These are usually methods of the probabilistic type, and for them to be employed, at least the statistical distributions of the quantities taken as input data must be known. Such a requirement is often difficult to meet for the parameters used in calculations related to accident reconstruction; distributions of this kind still need to be determined, as such data are lacking even for the most fundamental parameters used by forensic experts, such as the driver reaction time.


Author information

Correspondence to Marek Guzek.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Guzek, M., Lozia, Z. Computing Methods in the Analysis of Road Accident Reconstruction Uncertainty. Arch Computat Methods Eng 28, 2459–2476 (2021). https://doi.org/10.1007/s11831-020-09462-w
