Dimensionality reduction and identification of valid parameter bounds for the efficient calibration of automated driving functions
Abstract
The industrialization of automated driving functions according to level 3 requires an efficient test and calibration concept to deal with increased complexity, growing customer demands, and a larger vehicle fleet offered. Therefore, a method for a complexity reduction of the calibration parameter space is presented. In the two-step approach, a qualitative sensitivity analysis is used to identify valid regions in the search space and subsequently decrease dimensionality based on the parameter-specific global influences. The reduced parameter space and sensitivity information can then serve as a starting point for an efficient calibration process on the target hardware. To examine the method’s potential, our approach is applied to the parameter space of an automated driving function. The results expose clear dependencies between parameters and driving scenarios and allow an exclusion of parameter space dimensions based on sensitivity values. The predefined search space can be narrowed down to valid regions using the parameter range identification approach. Finally, the findings are validated with a quantitative variance-based sensitivity analysis. The validation confirms that our method provides equivalent results with a comparably smaller number of system evaluations.
Keywords
Sensitivity analysis · Complexity reduction · Automated driving

1 Introduction
The upcoming market introduction of highly automated driving functions increases the requirements for the customer-orientated calibration of these systems. To deal with increased complexity, virtual testing tools (e.g., software in the loop and hardware in the loop) are used to lower the ratio of vehicle tests and, therefore, save costs and resources. For the parameterization of automated driving functions, optimization algorithms can be applied to obtain a certain system behaviour in dedicated driving situations using a closed-loop simulation environment. A reduction and improved system understanding of the high-dimensional parameter space prior to the simulative optimization can be valuable to enable a fast convergence towards the global optimum [14]. Moreover, it can serve as a valuable input for the manual calibration on the target hardware.
One commonly used approach to enable a more efficient optimization is the combination of optimization algorithms with sensitivity analyses (see, for example, [14, 15, 28]). The knowledge about the influence of a parameter on the objective function enables a fast-converging optimization procedure and even offers chances to neglect parameters if their sensitivity value is small enough [23]. An objective function is used to evaluate the system behaviour in an optimization problem. Whereas sensitivity analyses were earlier mainly of mathematical interest to analyse the influence of the variables in differential equations, their field of application has become considerably larger [12]. The development of complex control systems and high-dimensional, nonlinear models motivated the usage of sensitivity analyses in the vehicle systems area to increase model understanding. Suarez et al. [25] and Wang [27] use sensitivity analyses to evaluate the influence of design parameters of a vehicle body on the driving behaviour. Another popular area for sensitivity analyses and virtual optimizations is the calibration of powertrain components such as engines and transmissions. Due to their isolated testability on a test bench or as an X-in-the-loop model, sensitivity measures are calculated to understand the impact of certain design parameters on the powertrain performance (see, for example, [7, 18, 19]). A comparative review of different sensitivity analysis methods and their applications is provided by Hamby [10] as well as Iooss et al. [12].
Next to the chance to reduce dimensionality by neglecting parameters with small sensitivities, the optimization space is further bounded by parameter-specific ranges. Due to their potential to increase the effectiveness of optimization algorithms, several bounding approaches have been introduced in the context of system identification. The term ‘bounding’ therein stands for the process of confining parameters of a system so that the input–output error remains below a certain threshold [16]. Bijan et al. [2] analytically derive valid ranges for input parameters of a genetic algorithm and observe an improved convergence behaviour. The approaches described by Cerone and Regruto [5, 6] enable an identification of valid parameter bounds for nonlinear dynamic control systems by modelling the nonlinear block as a linear combination of polynomials given a bounded output error. A comparative review of further bounding approaches is given by Milanese et al. [16].
The definition of valid parameter bounds is especially important prior to the application of sensitivity analyses. If invalid parameter values are considered, they might lead to an erroneous system behaviour that falsifies the resulting sensitivity metrics [22]. However, factor bounds are usually defined empirically if an analytical derivation is not possible, mostly because the computational effort needed to ascertain exact bounds is too high [16].
In this contribution, we introduce an integrated method for an efficient analytical identification of valid parameter bounds and calculation of sensitivity values for the calibration problem of an automated driving function. By applying a qualitative sensitivity analysis to different regions in the parameter space, we aim to identify invalid areas that would lead to distorted results in the subsequent influence analysis and optimization. The usage of synergies in the sampling plan and the efficiency of the applied sensitivity analysis enable a reasonably small number of system evaluations needed to provide reliable results. Whereas simulative parameter analyses are already established in various areas (e.g., powertrain calibration), parameterizations of automated driving functions are nowadays mostly obtained on the target hardware. The hereafter described method offers a first step towards virtual parameterizations by providing an increased system understanding of the search space and redefining a subspace with the most influential parameters and validated domains.
The remainder of this paper is organized as follows. Section 2 provides the theoretical background for our work, including the applied sensitivity analyses and related convergence measures. Based on that, Sect. 3 introduces our method for the combined dimensionality reduction and parameter range identification. In Sect. 4, we apply the approach to reduce the complexity of the parameter space of a level 3 automated driving function [20]. The results of Sect. 4 are thereafter validated with a comprehensive quantitative sensitivity analysis (Sect. 5), where we cross-check the impact of different parameter regions. Section 6 finally concludes the paper.
2 Theoretical background
As already mentioned, we use a qualitative sensitivity analysis to examine different regions of the search space and derive valid parameter ranges or influence measures. The findings provided by that are thereafter validated with a quantitative variance-based method with a distinctly larger sampling plan. Finally, we introduce convergence measures to evaluate the sufficiency of the sample size.
2.1 Elementary effects method
Parameters lying in the upper right corner of the plot are thus more influential than those in the lower left corner. As mentioned above, the sensitivity metrics are qualitative, i.e., they can be used to compare parameter impacts among each other, but do not allow a direct conclusion about the absolute influence on F(P).
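The computation behind the μ*/σ plot can be illustrated with a minimal sketch of the elementary effects method. The helper `elementary_effects` below is a hypothetical illustration, not the exact sampling scheme of this work: it applies a simple one-at-a-time (OAT) perturbation per base point to an arbitrary objective `f`.

```python
import numpy as np

def elementary_effects(f, base_points, delta=0.1):
    """Compute Morris-style elementary effects from OAT samples.

    For each base point x (a numpy array), every parameter i is perturbed
    by `delta` one at a time and the scaled output change is recorded.
    """
    effects = []
    for x in base_points:
        fx = f(x)
        ee = np.empty(len(x))
        for i in range(len(x)):
            x_i = x.copy()
            x_i[i] += delta
            ee[i] = (f(x_i) - fx) / delta
        effects.append(ee)
    effects = np.asarray(effects)           # shape (r, k)
    mu_star = np.abs(effects).mean(axis=0)  # mean absolute effect
    sigma = effects.std(axis=0, ddof=1)     # spread: nonlinearity/interactions
    return mu_star, sigma
```

μ* ranks the main influence of each parameter, while σ flags nonlinearities and interactions; parameters for which both values are small are candidates for exclusion.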
2.2 Variance-based sensitivity analysis
2.3 Convergence measures for sensitivity analyses
The variables \(S_{i}^{\text{ub}}\) and \(S_{i}^{\text{lb}}\) describe upper and lower bounds of the confidence interval for a sensitivity measure to a given confidence level \(\kappa_{\text{Conf}}\).
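Such confidence bounds are commonly obtained by bootstrap resampling of the elementary effects (c.f. Efron [9]). The following sketch is an assumed implementation for a single parameter, not the paper's exact procedure:

```python
import numpy as np

def bootstrap_ci(effects, n_boot=1000, conf=0.95, seed=0):
    """Bootstrap confidence interval for the mu* estimate of one parameter.

    `effects` holds the r elementary effects of a single parameter; a
    narrow interval indicates that the sample size r is sufficient.
    """
    rng = np.random.default_rng(seed)
    effects = np.asarray(effects)
    r = len(effects)
    # Resample the r effects with replacement, n_boot times.
    resamples = rng.integers(0, r, size=(n_boot, r))
    mu_star_boot = np.abs(effects[resamples]).mean(axis=1)
    # Percentile interval at the requested confidence level.
    lo, hi = np.quantile(mu_star_boot, [(1 - conf) / 2, (1 + conf) / 2])
    return lo, hi
```

The interval width (hi − lo) can then be tracked against the number of system evaluations to decide when the analysis has converged.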
3 Combined method for the complexity reduction of the calibration problem for automated driving functions
The input block of the chart comprises a simulation environment as the means to evaluate performance of the driving function in a representative scenario catalogue measured by the objective function. The parameters enable an optimization of the driving behaviour and are the main subject of our analysis.
3.1 Sampling approach
As a first step, we need to generate a sampling plan that enables both a derivation of valid parameter bounds and a sensitivity analysis. Since every sampling point requires one simulation run of the whole scenario catalogue, we intend to keep the number of samples as small as possible while still keeping a good coverage of the parameter space. Moreover, we aim to use synergies within the design for both analyses.
To examine the influence range of a parameter i, we divide the initially defined domain, bounded by the upper bound \({\text{ub}}_{i}\) and lower bound \({\text{lb}}_{i}\), into smaller subregions. Thereafter, we perform sensitivity analyses in the reduced subspaces. The sensitivity value of the factor then provides information about its impact in the respective subrange. Due to its computational efficiency, we use the qualitative elementary effects method (EEM) to perform the influence analyses locally. Based on that, we derive a valid search space and calculate global sensitivity metrics based on those radial samples that lie in permitted areas. A more detailed description of the approach is given in Sects. 3.2 and 3.3.
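The subrange screening can be sketched for a single parameter as follows; the objective `f`, the number of subranges, and the step width are hypothetical choices for illustration, with all other parameters assumed fixed at nominal values:

```python
import numpy as np

def subrange_sensitivities(f, lb, ub, n_sub=5, r=10, seed=0):
    """Screen a single parameter over n_sub equal subranges of [lb, ub].

    Returns the subrange edges and a mu*-like sensitivity per subrange;
    (near-)zero values mark candidate regions without influence on f.
    """
    rng = np.random.default_rng(seed)
    fv = np.vectorize(f)
    edges = np.linspace(lb, ub, n_sub + 1)
    s = np.empty(n_sub)
    for k in range(n_sub):
        a, b = edges[k], edges[k + 1]
        delta = 0.1 * (b - a)                  # OAT step inside subrange k
        x = rng.uniform(a, b - delta, size=r)  # base points inside subrange k
        ee = (fv(x + delta) - fv(x)) / delta   # elementary effects
        s[k] = np.abs(ee).mean()
    return edges, s
```

Subranges whose sensitivity stays at zero over the whole catalogue are candidates for exclusion from the valid domain of the parameter.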
For the subsequent calculation of the global sensitivity indices, we aim to use those radial sampling groups that lie within valid areas of the search space. We, therefore, check all elementary effects in the sampling plans and concatenate them if all of their OAT variations are located in valid areas. Figure 4b illustrates the filtering of radial sampling groups based on their location in the parameter space and the updated valid parameter bounds. Points lying on the dashed lines in Fig. 4b are not considered for the global sensitivity analysis. Since the evaluation of the objective function has already been performed for the first part of the analysis and the sample plans were generated independently, we can reuse these results without further computational effort.
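A minimal sketch of this filtering step, assuming each radial group is stored as an array holding one base point plus its k OAT variations:

```python
import numpy as np

def filter_valid_groups(groups, lb_valid, ub_valid):
    """Keep only radial sampling groups whose points all lie within the
    updated valid bounds; their already-evaluated objective values can
    then be reused for the global sensitivity analysis at no extra cost.

    `groups` is a list of arrays of shape (k + 1, k): one base point
    plus k OAT variations per group.
    """
    lb = np.asarray(lb_valid)
    ub = np.asarray(ub_valid)
    return [g for g in groups if np.all(g >= lb) and np.all(g <= ub)]
```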
3.2 Derivation of valid parameter bounds
As already pointed out in Sect. 1, the validity of parameter bounds is crucial for the applicability of sensitivity analysis. We consequently intend to first identify valid parameter ranges and thereafter perform the Morris sensitivity analysis using only valid samples.
3.3 Calculation of scenariospecific sensitivities
The aforementioned parameter range identification serves as a necessary process step to ensure the validity of global sensitivity measures computed in the second step of our method (c.f. Fig. 1). More importantly, the same sampling plan was used as for the hereafter described analysis, which enables a reuse of radial sampling groups (c.f. Sect. 3.1). The goal of the method presented in the following is to improve system understanding for the manual calibration process and potentially exclude parameter space dimensions based on their global influence.
Given the reordered sampling plan as exemplarily illustrated in Fig. 4b, the sensitivity metrics \(\mu_{i}^{*}\) and \(\sigma_{i}\) for the valid search space can be calculated following the Morris method, as described in Sect. 2.1. To enable a comparison of parameter influences among scenarios, we use the normalization approach introduced in Eqs. (17) and (18). Since the simulation environment and the objective function may contain inaccuracies that complicate the transferability to the real world, the quantitative effect on F(P) is of minor interest. Even if the absolute influence might change between manoeuvres, the relative sensitivity still provides valuable information about the order of magnitude of the parameter influences in the respective scenarios.
Similar to the method in Sect. 3.2, we have the opportunity to exclude unimportant factors from the optimization problem if the respective sensitivity value falls below a previously defined threshold \(s_{\text{min} }^{\text{rel}}\). Note that the threshold for the dimensionality reduction does not necessarily have to be the same as for the parameter range reduction (\(s_{{\text{min} ,{\text{PB}}}}^{\text{rel}} )\).
3.4 Evaluation of the complexity reduction
The two-step method described above provides a reduced valid parameter region based on newly identified bounds on the one hand and scenario-specific sensitivities on the other hand. The normalization of influences allows the dimensionality reduction if the parameter’s sensitivity lies below the predefined threshold \(s_{\text{min} }^{\text{rel}}\).
The higher the abovementioned metrics (DR and PSR), the more the parameter space could be confined. It needs to be noted that the metrics serve as a means to compare the complexity reduction among scenarios and do not allow any conclusion to the actual reduction of computational resources or calibration time.
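The exact definitions of DR and PSR are given in equations not reproduced in this excerpt. The sketch below is a plausible reconstruction, labeled as an assumption: DR is read as the share of parameters whose normalized sensitivity falls below the relevance threshold, and PSR as the relative volume reduction of the valid search-space box.

```python
import numpy as np

def dr_metric(s_rel, s_min_rel):
    """Dimensionality reduction: share of parameters whose normalized
    sensitivity falls below the relevance threshold (reconstruction)."""
    return np.mean(np.asarray(s_rel) < s_min_rel)

def psr_metric(lb0, ub0, lb_valid, ub_valid):
    """Parameter space reduction, read here as the relative shrinkage of
    the search-space volume (reconstruction, not the paper's exact
    definition)."""
    w0 = np.asarray(ub0) - np.asarray(lb0)           # initial widths
    wv = np.asarray(ub_valid) - np.asarray(lb_valid) # valid widths
    return 1.0 - np.prod(wv / w0)
```

With this reading, a scenario in which two of eight parameters are negligible yields DR = 0.25, and shrinking one unit-range parameter to width 0.4 and another to 0.6 yields PSR = 0.76.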
4 Complexity reduction for the calibration of an automated driving function
In this section, we apply the previously presented method to reduce the complexity of the calibration problem of an automated driving function. Therefore, we first describe the underlying optimization problem (Sect. 4.1) before presenting results in Sect. 4.2. Finally, in Sect. 4.3, we discuss our approach critically and perform convergence analyses (Sect. 4.4).
4.1 Problem description
Keep lane on a curvy road (M1).
Lane change on the highway (M2).
Acceleration (M3).
Deceleration (M4).
Overtake slower vehicle on the highway (M5).
Lane change and stop (M6).
4.2 Complexity reduction
For the derivation of valid bounds, we follow the algorithm, as described in Fig. 6, and obtain a renewed lower bound for \(P_{1}\) of \({\text{lb}}_{1}^{\text{valid}} = 0.6\) (\({\text{ub}}_{1}^{\text{valid}}\) remains unchanged). Moreover, the upper bound of \(P_{2}\) can be redefined to \({\text{ub}}_{2}^{\text{valid}} = 0.6\) (\({\text{lb}}_{ 2}^{\text{valid}}\) remains unchanged). Parameters \(P_{3}\) and \(P_{4}\) seem to have no influence on F(P), since \(s_{i,k}\) remains zero over the whole domain. Parameters \(P_{5}\)–\(P_{8}\) on the other hand seem to be sensitive towards F(P). However, the final clarification of these assumptions can only be provided by a global sensitivity analysis in the valid parameter space region.
It can be seen that parameters \(P_{1}\)–\(P_{4}\) have comparably small values for the main effect (\(\mu^{*}\)) and low nonlinearities and interdependencies with other parameters (\(\sigma\) value). Parameters \(P_{5}\)–\(P_{8}\), on the other hand, lie further in the top right, which indicates higher interdependencies and nonlinearities. Parameter \(P_{8}\) appears to be the most influential due to its position in the upper right corner. By comparing these findings with the plots in Fig. 7, we notice that the results confirm the observations based on the parameter range analysis. The dimensionality reduction and parameter range identification can, therefore, be used as mutual plausibility checks.
Table 1 Results of the sensitivity analysis and dimensionality reduction method

Parameter  Metric  M1  M2  M3  M4  M5  M6
\(P_{1}\)  \(s_{1}^{\text{rel}}\)  0.0258  0  0  0  0  1
\(P_{2}\)  \(s_{2}^{\text{rel}}\)  0.0651  0.582  1  1  1  0.889
\(P_{3}\)  \(s_{3}^{\text{rel}}\)  0  0.874  0  0  0  0
\(P_{4}\)  \(s_{4}^{\text{rel}}\)  0  1  0  0  0  0
\(P_{5}\)  \(s_{5}^{\text{rel}}\)  0.433  0.449  0  0  0  0
\(P_{6}\)  \(s_{6}^{\text{rel}}\)  0.549  0.96  0  0  0  0
\(P_{7}\)  \(s_{7}^{\text{rel}}\)  0.825  0.681  0  0  0  0
\(P_{8}\)  \(s_{8}^{\text{rel}}\)  1  0.698  0  0  0  0
DR  –  0.25  0.125  0.875  0.875  0.875  0.25
Table 2 Results of the parameter range identification method

Parameter  Bound  M1  M2  M3  M4  M5  M6
\(P_{1}\)  \({\text{lb}}_{1}^{\text{valid}}\)  0.6  0  0  0  0  0.4
\(P_{1}\)  \({\text{ub}}_{1}^{\text{valid}}\)  1  1  1  1  1  0.6
\(P_{2}\)  \({\text{lb}}_{2}^{\text{valid}}\)  0  0.2  0.3  0.3  0  0
\(P_{2}\)  \({\text{ub}}_{2}^{\text{valid}}\)  0.6  1  0.5  0.5  0.5  0.4
\(P_{3}\)  \({\text{lb}}_{3}^{\text{valid}}\)  0  0  0  0  0  0
\(P_{3}\)  \({\text{ub}}_{3}^{\text{valid}}\)  1  1  1  1  1  1
\(P_{4}\)  \({\text{lb}}_{4}^{\text{valid}}\)  0  0  0  0  0  0
\(P_{4}\)  \({\text{ub}}_{4}^{\text{valid}}\)  1  1  1  1  1  1
\(P_{5}\)  \({\text{lb}}_{5}^{\text{valid}}\)  0  0  0  0  0  0
\(P_{5}\)  \({\text{ub}}_{5}^{\text{valid}}\)  1  1  1  1  1  1
\(P_{6}\)  \({\text{lb}}_{6}^{\text{valid}}\)  0  0  0  0  0  0
\(P_{6}\)  \({\text{ub}}_{6}^{\text{valid}}\)  1  1  1  1  1  1
\(P_{7}\)  \({\text{lb}}_{7}^{\text{valid}}\)  0  0  0  0  0  0
\(P_{7}\)  \({\text{ub}}_{7}^{\text{valid}}\)  1  1  1  1  1  1
\(P_{8}\)  \({\text{lb}}_{8}^{\text{valid}}\)  0  0  0  0  0  0
\(P_{8}\)  \({\text{ub}}_{8}^{\text{valid}}\)  1  1  1  1  1  1
PSR  –  0.76  0.2  0.8  0.8  0.5  0.88
When analysing the results in Table 1, it becomes obvious that the impact of many parameters changes with regard to the scenario. Whereas scenarios M3–M5 can be optimized solely by parameter \(P_{2}\), the distribution of influential parameters changes for the remaining scenarios. Since these scenarios mainly consist of longitudinal changes of the driving state with low curvature changes, while the others contain larger lateral requirements, the parameter \(P_{2}\) might be especially influential towards the optimization of the longitudinal driving behaviour. Moreover, it becomes clear that some of the parameters (e.g., \(P_{3}\) and \(P_{4}\)) have an exclusive influence on only one manoeuvre, whereas other parameters (e.g., \(P_{1}\), \(P_{2}\), and \(P_{5}\)–\(P_{8}\)) seem to affect more manoeuvres. The different parameter bounds per manoeuvre (c.f. Table 2) finally confirm the potential of the scenariospecific analysis, since influence ranges change depending on the driving situation.
The different values for DR and PSR with respect to the manoeuvre represent the characteristics of our optimization problem. Parameters are not all equally influential and do not have a fixed influence range; instead, the valid parameter space for the optimization varies heavily depending on the scenario. Exposing these findings with our analysis, therefore, simplifies the parameter tuning, since the search space and thus the number of possible parameter combinations can be reduced compared to the initial setup. It should be noted that high values for PSR and DR indicate a reduction of the parameter space, but do not necessarily improve subsequent optimizations. If the relevance threshold is too small, optimization algorithms might still search areas with negligible influences and get stuck in local optima.
4.3 Discussion of the results
The results, as illustrated in Sect. 4.2, imply a grouping of the variables with regard to scenario specifications and a change of parameter influences within different domains. To analyse our findings, we need to look into the chain of effects and understand the structure of the function modules. Many of the tuneable parameters for automated driving functions are located in the control systems of the actuators (e.g., the steering system). Alternatively, they are located inside motion-planning modules and serve as, e.g., weighting factors in the cost function of an optimization problem, which is the case for our problem (c.f. Sect. 4.1). Since the implementation of the analysed trajectory-planning module separates longitudinal and lateral strategies, there may also exist varying sensitivities of the corresponding parameters per scenario. Manoeuvres 3 and 4 are mainly longitudinal manoeuvres, whereas M1 and M2 mostly contain lateral parts. The results expose a larger influence of parameter \(P_{2}\) on longitudinal scenarios, whereas parameters \(P_{3}\)–\(P_{8}\) seem to show a sensitive behaviour towards lateral scenarios. As described before (Sect. 3.2), we chose a very large initial range to make sure that the actual valid range lies between those initial bounds. The tendency of some parameters to lose impact at the outer areas of the ranges can be explained when looking at the function modules in more detail. Very high absolute values for a weighting factor within a cost function could, for example, cause an overcompensation of all remaining factors and thus the planning of unrealistic driving states, which is caught by further constraints of the system. Alternatively, the switch to a fallback emergency mode for an unrealistic planning behaviour is possible. On the other hand, a small value of such a factor would lead to a decreased influence and, therefore, an overcompensation by other factors.
Next to the findings in cost functions, a similar behaviour can be observed in control systems when a parameter serves as a gain factor or the affected signal runs into a limiter, so that its influence does not change anymore below or above a certain threshold.
The findings described before confirm the applicability of our approach to the parameter space of an automated driving function. For further driver assistance systems (e.g., adaptive cruise control, lane keeping assistant), a similar performance can be expected, since the structure of the chain of effects is comparable. It mostly comprises control systems and motion-planning modules with tuneable parameters affecting the closed-loop driving behaviour. Internal safety modules preventing undesired driving behaviour caused by invalid parameterizations of the driving function are mostly required by law, so that fallback layers may limit the influential regions as exposed above. The transferability of this concept to other problems with large parameter spaces is generally given. However, the effectiveness strongly depends on the characteristics of the optimization problem. If the objective function is not limited by any constraints, the method might not allow a reduction of the parameter space. Since the chain of effects for many vehicle control systems (e.g., powertrain and suspension system) is similar to automated driving functions with respect to safety restrictions, we may be able to reduce parameter space complexity for these cases.
4.4 Convergence analysis
The results show that valid parameter ranges can already be achieved with a relatively small number of system evaluations. The width of the confidence interval reaches a value of zero after \(n_{\text{PB}} \approx 10000\), which is equivalent to r = 14 elementary effects in the respective subspace [c.f. Eq. (16)].
It is noticeable that the parameter range identification requires a distinctly larger sampling size than the dimensionality reduction method. These findings align with our method, since we reuse valid radial samples from the initial plan for the global sensitivity analysis. However, if the initial parameter ranges are reduced strongly, there is a high risk that the number of radial samples for the second part of our method becomes too small to provide reliable results. Since optimal values for \(n_{\text{EEM}}\) and \(n_{\text{PB}}\) are problem-specific, it is always recommended to perform a convergence analysis after applying the method to ensure the validity of the results. If \(n_{\text{EEM}}\) turns out to be too small, one can generate more OAT samples within the reduced parameter space following the sampling method, as shown in Sect. 2.1.
5 Validation of the results
To validate our findings, we finally apply the quantitative variancebased sensitivity analysis to our problem. Similar to Sect. 4, we use the manoeuvre M1 as a representative for the scenario catalogue. We, therefore, divide the search space into a valid region and an invalid region. The valid region is defined through the bounds \({\text{lb}}_{i}^{\text{valid}}\) and \({\text{ub}}_{i}^{\text{valid}}\), whereas the invalid space represents the regions in the parameter space outside permitted bounds. To evaluate our findings of the global sensitivity analysis, we compute the main (ME) and total effects (TE) with the variancebased sensitivity analysis (c.f. Sect. 2.2) in the permitted subspace and compare them to the relative sensitivities \(s_{i}^{\text{rel}}\) provided by the EEM. In a second step, another quantitative VBSA is performed inside invalid regions of the parameter space. The comparison of sensitivity metrics in the valid and invalid space finally allows the validation of the complexity reduction approach.
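The main and total effects of the VBSA are computed from two independent sample matrices A and B and the combined matrices \(A_{B}^{i}\) (c.f. Sect. 2.2, [13, 21]). The sketch below uses the standard Saltelli/Jansen estimators with plain Monte Carlo sampling for brevity; quasi-random sequences are typically preferred in practice:

```python
import numpy as np

def sobol_indices(f, k, n=1024, seed=0):
    """Main (ME) and total effects (TE) via Saltelli/Jansen estimators.

    A_B^i is built by replacing column i of A with the corresponding
    column of B; f maps a length-k parameter vector to a scalar.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, k))
    B = rng.random((n, k))
    fA = np.apply_along_axis(f, 1, A)
    fB = np.apply_along_axis(f, 1, B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        fABi = np.apply_along_axis(f, 1, ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var          # main effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var   # total effect (Jansen)
    return S, ST
```

Each index requires n additional evaluations per parameter, which explains the comparably large sample sizes of the VBSA reported below.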
When comparing the distributions of main and total effects between Fig. 12a, b, we observe an average decrease of the main effect for \(P_{1}\)–\(P_{4}\) of approximately 50% and a reduction of the total effect by approx. 30%. The distinct reduction of these sensitivity metrics outside of the valid bounds and the increase of the influences for \(P_{5}\)–\(P_{8}\) confirm the findings of the parameter range reduction. The fact that the sensitivities of \(P_{1}\) and \(P_{2}\) are not closer to zero might be caused by the less conservative VBSA approach (see above) and the variance and mean computation technique (c.f. Sect. 2.2) applied in this work. The computation based on a combined matrix \(A_{B}^{i}\) built from two independent matrices A and B might cause small approximation errors [13]. The results suggest that by applying the elementary effects method locally in certain regions of the parameter space, we can reliably reduce domains, so that non-influential areas are neglected. The validity of the findings could be confirmed qualitatively with the VBSA. The comparably high number of system evaluations needed (\(n_{\text{VBSA}} = 16400\)) to achieve convergence underlines the computational efficiency of our approach (\(n_{\text{EEM}} \approx 1000\)).
6 Conclusion
In this contribution, we introduced a two-step method for a combined complexity reduction of the parameter space for the calibration of automated driving functions. To reduce the search space with a minimum number of system evaluations, we locally apply the qualitative Morris analysis to examine the respective subspaces. Thereafter, we reuse valid samples to perform a global influence analysis. Our approach thus offers the opportunity to first narrow down individual domains of parameters and second exclude calibration factors based on their global influence on the objective function. The potential of the method is evaluated by performing a complexity reduction for the parameter space of an automated driving function. We, therefore, formulate a representative scenario catalogue containing six manoeuvres and apply the method to every scenario individually. The results expose a clear dependency of the parameter’s impact and influence range on the characteristics of the scenario. The analysis enables us to reduce the dimensionality by 47.5% and redefine an up to 65.7% smaller parameter space on average with regard to the respective scenario. Next to the distinct reduction of complexity, the sensitivity values provide an improved system understanding and can help to derive a calibration strategy based on a ranked parameter list. The relatively small number of system evaluations (\(n_{\text{PB}} \approx 10000\) or \(n_{\text{EEM}} \approx 1000\)) compared to a variance-based approach (\(n_{\text{VBSA}} \approx 16400)\) underlines the potential of the analysis as a preceding step to the actual parameterization. The application of the method to the problem presented in this work shows that it can save expensive calibration time, which is especially worthwhile on the target hardware.
In further investigations, we intend to evaluate the potential of this approach in combination with an optimization algorithm and the transferability to the vehicle. Moreover, the presented method should be applied to further vehicle control systems and large parameter spaces of other areas (e.g., scenario creation for the validation of automated driving functions) to evaluate its transferability. Additional investigations could concentrate on developing this method further, so that invalid regions can be identified not only in the outer areas of the parameter ranges but over the whole domain.
Acknowledgements
We would like to express our gratitude to the developers of the department of automated driving at BMW for providing us all relevant information to develop this method.
Compliance with ethical standards
Conflict of interest
The authors declared no potential conflict of interest.
References
1. Bellem, H., Schönenberg, T., Krems, J.F., Schrauf, M.: Objective metrics of comfort: developing a driving style for highly automated vehicles. Transp. Res. Part F Traffic Psychol. Behav. 41, 45–54 (2016)
2. Bijan, M.G., Al-Badri, M., Pillay, P., Angers, P.: Induction machine parameter range constraints in genetic algorithm based efficiency estimation techniques. IEEE Trans. Ind. Appl. 54(5), 4186–4197 (2018)
3. Campolongo, F., Cariboni, J., Saltelli, A.: An effective screening design for sensitivity analysis of large models. Environ. Modell. Softw. 22(10), 1509–1518 (2007)
4. Campolongo, F., Saltelli, A., Cariboni, J.: From screening to quantitative sensitivity analysis. A unified approach. Comput. Phys. Commun. 182(4), 978–988 (2011)
5. Cerone, V., Regruto, D.: Parameter bounds for discrete-time Hammerstein models with bounded output errors. IEEE Trans. Autom. Control 48(10), 1855–1860 (2003)
6. Cerone, V., Regruto, D.: Parameter bounds evaluation of Wiener models with non-invertible polynomial nonlinearities. Automatica 42(10), 1775–1781 (2006)
7. Chiang, C.J., Stefanopoulou, A.G.: Sensitivity analysis of combustion timing of homogeneous charge compression ignition gasoline engines. J. Dyn. Syst. Meas. Contr. 131(1), 014506 (2009)
8. Dunn-Rankin, P., Knezek, G.A., Wallace, S.R., Zhang, S.: Scaling Methods. Psychology Press, Hove (2014)
9. Efron, B.: Bootstrap methods: another look at the jackknife. In: Breakthroughs in Statistics, pp. 569–593. Springer, New York (1992)
10. Hamby, D.M.: A review of techniques for parameter sensitivity analysis of environmental models. Environ. Monit. Assess. 32(2), 135–154 (1994)
11. Herman, J.D., Kollat, J.B., Reed, P.M., Wagener, T.: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models. Hydrol. Earth Syst. Sci. 17(7), 2893–2903 (2013)
12. Iooss, B., Lemaître, P.: A review on global sensitivity analysis methods. In: Uncertainty Management in Simulation-Optimization of Complex Systems, pp. 101–122. Springer, Boston (2015)
13. Jansen, M.J.: Analysis of variance designs for model output. Comput. Phys. Commun. 117(1–2), 35–43 (1999)
14. Mach, F.: Reduction of optimization problem by combination of optimization algorithm and sensitivity analysis. IEEE Trans. Magn. 52(3), 1–4 (2016)
15. Maute, K., Nikbay, M., Farhat, C.: Coupled analytical sensitivity analysis and optimization of three-dimensional nonlinear aeroelastic systems. AIAA J. 39(11), 2051–2061 (2001)
16. Milanese, M., Norton, J., Piet-Lahanier, H., Walter, É. (eds.): Bounding Approaches to System Identification. Springer, New York (2013)
17. Morris, M.D.: Factorial sampling plans for preliminary computational experiments. Technometrics 33(2), 161–174 (1991)
18. Pei, Y., Davis, M.J., Pickett, L.M., Som, S.: Engine combustion network (ECN): global sensitivity analysis of Spray A for different combustion vessels. Combust. Flame 162(6), 2337–2347 (2015)
19. Rakopoulos, C.D., Rakopoulos, D.C., Giakoumis, E.G., Kyritsis, D.C.: Validation and sensitivity analysis of a two-zone diesel engine model for combustion and emissions prediction. Energy Convers. Manage. 45(9–10), 1471–1495 (2004)
20. SAE On-Road Automated Vehicle Standards Committee: Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. SAE International, Warrendale (2014)
21. Saltelli, A., Annoni, P., Azzini, I., Campolongo, F., Ratto, M., Tarantola, S.: Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index. Comput. Phys. Commun. 181(2), 259–270 (2010)
22. Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Tarantola, S.: Global Sensitivity Analysis: The Primer. Wiley, New York (2008)
23. Saltelli, A., Tarantola, S., Campolongo, F., Ratto, M.: Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models. Wiley, New York (2004)
24. Sarrazin, F., Pianosi, F., Wagener, T.: Global sensitivity analysis of environmental models: convergence and validation. Environ. Modell. Softw. 79, 135–152 (2016)
25. Suarez, B., Felez, J., Maroto, J., Rodriguez, P.: Sensitivity analysis to assess the influence of the inertial properties of railway vehicle bodies on the vehicle’s dynamic behaviour. Veh. Syst. Dyn. 51(2), 251–279 (2013)
26. Vanrolleghem, P.A., Mannina, G., Cosenza, A., Neumann, M.B.: Global sensitivity analysis for urban water quality modelling: terminology, convergence and comparison of different methods. J. Hydrol. 522, 339–352 (2015)
27. Wang, S.: Design sensitivity analysis of noise, vibration, and harshness of vehicle body structure. J. Struct. Mech. 27(3), 317–335 (1999)
28. Xu, W.T., Lin, J.H., Zhang, Y.H., Kennedy, D., Williams, F.W.: Pseudo-excitation-method-based sensitivity analysis and optimization for vehicle ride comfort. Eng. Optim. 41(7), 699–711 (2009)
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.