1 Introduction

During manufacture, the real product geometry inherently deviates from the nominal geometry. Additional geometric variations of assemblies arise from variations in the interface positions between their components. The product's geometry is closely related to its functions, which have to be translated into geometric requirements on the product. To fulfil these geometric requirements, tolerances restrict the product's geometrical variations with respect to the nominal geometry. For specifying suitable tolerances which ensure the product's functions, tolerance simulations assist the product developer by analyzing compliance with the requirements. The basis for tolerance simulations is formed by mathematical representation models for geometrical variations of the product. Several mathematical models exist for this purpose, such as the deviation domain (DD) or the Tolerance-Map® (T-Map®) (Ameta et al., 2011).

2 Convex hull techniques

This paper is based on convex hull techniques, such as DD (Giordano et al., 1999) or T-Map® (Davidson et al., 2002), which represent a deviating geometrical element by an abstract deviation space. The models have many similarities, although they differ in detail. Geometrical variations are represented by varying the position and orientation of geometrically ideal elements with respect to their nominal position. A plane rectangular feature with its associated deviation space is shown in Fig. 1. The ideal feature f is repositioned inside the tolerance zone. In the deviation domain model, the basis for these repositionings is the degrees of freedom (DOFs) of the feature, which are restricted by the tolerance. In the T-Map® approach, the deviating feature is formulated as a linear combination of extreme positions of the feature (Fig. 1):

  1. Maximal translation in ±z: vertices on the δz-axis;

  2. Maximal rotation around the x-axis: vertices on the δθ x -axis;

  3. Maximal rotation around the y-axis: vertices on the δθ y -axis.

Fig. 1

Rectangular feature with position tolerance and associated convex hull representation. The deviated feature inside the tolerance zone is represented by a point in the abstract deviation space

The methods are called 'convex hull techniques' here because T-Maps® and DDs are the convex hulls of these extreme positions. As shown in Fig. 1, in convex hull techniques all deviating features which lie inside the tolerance zone are represented as points inside an abstract deviation space. A comparison of these techniques can be found in (Mansuy et al., 2013). The techniques were developed independently, but with the same aim and scope. The main difference is that in deviation domains the axes of the domain are the DOFs of the feature, while the axes of a T-Map® all have the same dimension (Table 1 of Mansuy et al. (2013)), as this model is mathematically developed from a different point of view. The results of calculating a tolerance stack-up are the same for both models. Some similarities and differences of both models are also discussed in (Ziegler and Wartzack, 2015).

In the following, the basic concepts of convex hull methods are briefly formulated, based on linearized homogeneous 4 × 4 transformations Tlin. Tolerances are formulated with norms ∥ · ∥ as distance measures between deviating points u′ = Tlin · u and the nominal point u. Here, u stands for any point on the geometric element, the feature f. The formulation of a tolerance in the following is independent of the kind of feature (line, plane, etc.). For representing points in space, homogeneous coordinates are used. In these coordinates a point is represented by u = (u1, u2, u3, 1), whereby the fourth entry 1 simplifies the mathematical operations. Points in the abstract deviation space are called small displacement torsors (SDTs), formulated as (x, θ) = (x1, x2, x3, θ1, θ2, θ3) ∈ ℝ3 × [−π, π]3, whereby unrestricted DOFs x i , θ j of the feature are set to 0.

Let f ⊂ ℝ3 × {1} be a bounded, connected parametric surface, curve, or point called “feature”, and u = (u1, u2, u3, 1) ∈ f a point of the feature. An SDT (x, θ) = (x1,x2,x3, θ1, θ2, θ3) defines a transformation

$${T^{{\rm{lin}}}}(x,\theta) = \left({\matrix{1 & {- {\theta _1}} & {{\theta _2}} & {{x_1}} \cr {{\theta _1}} & 1 & {- {\theta _3}} & {{x_2}} \cr {- {\theta _2}} & {{\theta _3}} & 1 & {{x_3}} \cr 0 & 0 & 0 & 1 \cr } } \right),$$
(1)

which maps the feature f to a deviating feature

$$f_{\rm{d}}^{x,\theta } = \{ y \in {\mathbb{R}^3} \times \{ 1\} \vert\exists u \in f\,{\rm{so}}\,{\rm{that}}\,y = {T^{{\rm{lin}}}}u\}.$$
(2)
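To make Eqs. (1) and (2) concrete, the following sketch builds Tlin from an SDT and maps the homogeneous corner points of a rectangular feature. The feature coordinates and deviation values are illustrative assumptions, not data from this paper.

```python
import numpy as np

def t_lin(x, theta):
    """Linearized homogeneous transformation of Eq. (1).

    x, theta: length-3 sequences of small translations / rotations (an SDT).
    The rotation part follows the sign convention of Eq. (1)."""
    x1, x2, x3 = x
    t1, t2, t3 = theta
    return np.array([
        [1.0, -t1,  t2, x1],
        [ t1, 1.0, -t3, x2],
        [-t2,  t3, 1.0, x3],
        [0.0, 0.0, 0.0, 1.0],
    ])

# A deviating feature (Eq. (2)): map each homogeneous point u of f.
f = np.array([[ 1.0,  1.0, 0.0, 1.0],   # corner points of a 2 x 2 rectangle
              [-1.0,  1.0, 0.0, 1.0],
              [-1.0, -1.0, 0.0, 1.0],
              [ 1.0, -1.0, 0.0, 1.0]])
T = t_lin([0.0, 0.0, 0.02], [0.01, 0.005, 0.0])
f_dev = f @ T.T                          # y = T^lin · u for every u in f
```

The homogeneous fourth coordinate stays 1 under the mapping, which is exactly the simplification mentioned above.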

In the following, Tlin = Tlin (x, θ). A geometric tolerance with tolerance value t > 0 is represented by a constraint of the form

$$\mathop {\max }\limits_{u \in f} \left\Vert {({T^{{\rm{lin}}}} - I)\cdot u} \right\Vert \leq {t \over 2},$$
(3)

where I is the identity matrix. Note that only symmetric dimensional tolerances are considered here. A discussion of a convex hull representation of asymmetric dimensional tolerances can be found in (Roy and Li, 1999). The constraint means that the distance of every point u on f to the deviating point u′ on \(f_{{\rm d}}^{x,\theta}\) should be less than or equal to t/2. The deviation domain D f ⊂ ℝ3 × [−π, π]3 of f with respect to a tolerance is the set of all SDTs of f, for which Eq. (3) holds. A stack-up of parts is represented by a transformation chain \(T_{n}^{{\rm lin}}\cdots T_{1}^{{\rm lin}}=:T_{{\rm sum}}^{{\rm lin}}\) of the features in contact. The stack-up deviation domain Dsum is the domain of possible parameters of \(T_{{\rm sum}}^{{\rm lin}}\). It can be shown that Dsum is the Minkowski sum of the deviation domains \(D_{f}^{i}\), where the Minkowski sum is defined by

$$A + B = \{ x + y\vert x \in A,y \in B\}.$$
(4)
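For polytopic domains given by their vertices, the Minkowski sum of Eq. (4) can be sketched by pairwise vertex addition; the stack-up domain is then the convex hull of the result. The two square domains below are illustrative assumptions.

```python
import numpy as np

def minkowski_sum(A, B):
    """Pairwise vertex sums A + B = {x + y | x in A, y in B} (Eq. (4)).

    For convex polytopes given by vertex lists, the convex hull of the
    returned point set is the Minkowski sum of the polytopes."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return (A[:, None, :] + B[None, :, :]).reshape(-1, A.shape[1])

# Two 2D deviation domains as vertex lists (squares of half-width 0.5 and 0.25)
D1 = [[0.5, 0.5], [-0.5, 0.5], [-0.5, -0.5], [0.5, -0.5]]
D2 = [[0.25, 0.25], [-0.25, 0.25], [-0.25, -0.25], [0.25, -0.25]]
D_sum = minkowski_sum(D1, D2)   # hull of these 16 points: square of half-width 0.75
```

The quadratic growth of candidate vertices per summand hints at the computational-effort problem of the Minkowski sum discussed below in Section 2.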

The clearance between two parts p i and p j can be formulated as a domain, too. The clearance domain \(C_{p_{i},p_{j}}\) is the domain of SDTs for position deviation of p i with respect to p j . \(C_{p_{i},p_{j}}\) is generated by constraints, which ensure that there is no collision of p i and p j .

Finally, the set of extremal points uCP of the convex hull ch(f) of the feature f is called the set of "control points" CP (Roy and Li, 1999). Extremal points x ∈ A of a convex domain A cannot be represented by a linear combination x = λy + (1 − λ)z, where y, z ∈ A and λ ∈ (0, 1). According to the theorem of Caratheodory, every u ∈ f is the convex combination of a finite number of points \(u_{CP}^{i}\in f\), which means that \(u = \sum\nolimits_{i = 1}^n {} {\lambda _i}u_{{\rm{CP}}}^i\) with λ i ≥ 0 and \(\sum\nolimits_{i = 1}^n {} {\lambda _i} = 1\). If for all \(u_{{\rm CP}}^{i}\in {\bf CP}\) the inequality

$$\left\Vert {({T^{{\rm{lin}}}} - I)\cdot u_{{\rm{CP}}}^i} \right\Vert \leq {t \over 2}$$
(5)

holds, then it follows for u with the triangle inequality and the definition of the convex combination

$$\matrix{{\left\Vert {({T^{{\rm{lin}}}} - I)u} \right\Vert = \left\Vert {({T^{{\rm{lin}}}} - I)\sum\limits_{i = 1}^n {{\lambda _i}u_{{\rm{CP}}}^i} } \right\Vert} \hfill \cr {\quad \quad \quad \quad \quad \leq \sum\limits_{i = 1}^n {{\lambda _i}} \left\Vert {({T^{{\rm{lin}}}} - I)u_{{\rm{CP}}}^i} \right\Vert} \hfill \cr {\quad \quad \quad \quad \quad \leq \sum\limits_{i = 1}^n {{\lambda _i}} {t \over 2} = {t \over 2}.} \hfill \cr}$$
(6)

Therefore, inequality (3) can be reduced to a set of control points \(CP = \{ u_{{\rm{CP}}}^1,u_{{\rm{CP}}}^2, \cdots, u_{{\rm{CP}}}^n\}\)

$$\mathop {\max }\limits_{u_{{\rm{CP}}}^i \in {\bf{CP}}} \left\Vert {({T^{{\rm{lin}}}} - I)\cdot u_{{\rm{CP}}}^i} \right\Vert \leq {t \over 2}.$$
(7)

Every control point \(u_{{\rm CP}}^{i}\) defines, by Eq. (5), a domain \(D_{f}^{i}\) in the SDT-space. The intersection of them is the deviation domain \({D_f} = \bigcap\nolimits_{i = 1}^n {D_f^i}\). Based on this (and the simplification of features to polygons), the deviation domains can be constructed. The clearance constraints between two parts analogously are reduced to constraints for control points.
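The reduction to control points in Eqs. (5) and (7) can be sketched as a membership test for the deviation domain: an SDT belongs to D f exactly when every control point stays within t/2 of its nominal position. The Euclidean norm and the feature geometry are assumptions for illustration.

```python
import numpy as np

def in_deviation_domain(x, theta, control_points, t):
    """Test Eq. (7): an SDT lies in D_f iff every control point of f
    is displaced by at most t/2 under T^lin - I (Euclidean norm assumed)."""
    x = np.asarray(x, float)
    t1, t2, t3 = theta
    # rotation part of (T^lin - I), sign convention of Eq. (1)
    R = np.array([[0.0, -t1,  t2],
                  [ t1, 0.0, -t3],
                  [-t2,  t3, 0.0]])
    for u in np.asarray(control_points, float):
        d = R @ u + x               # displacement of the control point
        if np.linalg.norm(d) > t / 2:
            return False
    return True

# Corner (control) points of a 2 x 2 rectangular plane feature
CP = [[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]]
ok = in_deviation_domain([0, 0, 0.04], [0, 0, 0], CP, t=0.1)  # pure z-translation
```

Because the check runs only over the finitely many control points, the maximum over all of f in Eq. (3) never has to be evaluated.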

Usually, convex hull techniques are used for worst case tolerance analysis (Ameta et al., 2011). The stack-up domain is compared with a functional domain which represents the functional requirements. This approach has two main weaknesses:

  1. The computational effort of the main operation, the Minkowski sum, is highly dependent on the stack-up length and can become very high. An example based on very simple polytopes can be found in (Weibel, 2007).

  2. The tolerancing methods based on convex hull techniques have no adequate indicator for identifying the main contributors.

Statistical tolerance analysis methods based on numerical Monte-Carlo (MC) methods avoid the strong dependence of the number of arithmetic operations on the tolerance stack-up size. Although there exists a statistical approach based on Tolerance-Maps® (Khan et al., 2010), it is not capable of identifying the main contributing tolerances of the stack-up.

3 Sensitivity analysis based on convex hull techniques

A suitable definition of sensitivity analysis (SA) can be found in Saltelli et al. (2008): “Sensitivity analysis studies the relationships between information flowing in and out of the model.” According to Stuppy and Meerkamm (2009), there are three common methods for sensitivity or contributor analysis (both terms are used interchangeably in the following) in tolerancing: arithmetical contributor analysis, statistical contributor analysis, and high-low-median (HLM) sensitivity analysis. The main disadvantage of these methods is their restriction to 1D tolerance stack-ups. Additionally, they are local SA methods, meaning they are not capable of taking into account the whole domain of the input parameters. So they cannot capture effects which arise from nonlinear model relations.

3.1 Variance-based sensitivity analysis

In contrast, global sensitivity analysis methods are capable of taking into account nonlinear model relations. There are many different global sensitivity analysis methods; some can be found in (Saltelli et al., 2008). We focus here on variance-based sensitivity measures, as they are widely used and therefore well developed and understood.

Global variance-based sensitivity measures were introduced 40 years ago by Cukier et al. (1973) and extensively discussed at the beginning of the 1990s by Sobol’ (1993). The basis for the variance-based SA is the high dimensional model representation (HDMR)

$$\matrix{{f(x) = {f_0} + \sum\limits_{i = 1}^n {} {f_i}({x_i}) + \sum\limits_{i = 1}^n {} \sum\limits_{j > i}^n {} {f_{ij}}({x_i},{x_j})} \hfill \cr {\quad \quad \quad + \cdots + {f_{12 \ldots n}}({x_1},{x_2}, \ldots {x_n}),} \hfill \cr}$$
(8)

a decomposition of a square-integrable function f(x1, x2, …, x n ) which represents the simulation model, where x = (x1, x2, …, x n ) ∈ [0, 1]n. From the HDMR of f(x), it can be shown that the variance of f(x) decomposes to

$$V = \sum\limits_{i = 1}^n {} {V_i} + \sum\limits_{i = 1}^n {} \sum\limits_{j > i}^n {} {V_{ij}} + \cdots + {V_{12 \ldots n}},$$
(9)

where f 0 is the mean of f on [0, 1]n, V = Var(f) is the variance of f, and

$${V_i} = \int {f_i^2} ({x_i}){\rho _i}({X_i} = {x_i}){\rm{d}}{x_i},$$
(10)
$$\matrix{{{V_{ij}} = \int\!\!\!\int {f_{ij}^2} ({x_i},{x_j}){\rho _i}({X_i} = {x_i}){\rho _j}({X_j} = {x_j}){\rm{d}}{x_i}{\rm{d}}{x_j},} \hfill \cr \cdots \hfill \cr}$$
(11)

over the input parameter space with independent probability density functions ρ(X i ). Note that Var(f0) = 0, as f0 is constant. The variance decomposition in Eq. (9) is called analysis of variance HDMR (ANOVA-HDMR). The term V i describes the part of Var(f), which can be reduced to variations of x i . The higher terms V ij describe the part of Var(f), which additionally arise from varying x i and x j together. Terms of higher order are analogous. Based on the variance-decomposition, the following two sensitivity indices are calculated:

$${S_{{M_i}}} = {{{V_i}} \over V},$$
(12)
$${S_{{T_i}}} = {1 \over V}\left({{V_i} + \sum\limits_{\matrix{{j = 1} \hfill \cr {j \ne i} \hfill \cr } }^n {{V_{ij}}} + \sum\limits_{\matrix{{j = 1} \hfill \cr {j \ne i} \hfill \cr } }^n {} \sum\limits_{\matrix{{k = 1} \hfill \cr {k \ne i} \hfill \cr {k \ne j} \hfill \cr} }^n {{V_{ijk}}} + \cdots + {V_{12 \ldots n}}} \right).$$
(13)

\(S_{M_{i}}\) is called the main effect of x i and measures the direct influence of x i on f, while \(S_{T_{i}}\) is called the total effect and additionally considers interactions of x i with other input parameters.

There exist two different algorithmic approaches to calculate main and total effects: Monte-Carlo methods (Sobol’ and Jansen algorithm), which estimate the indices with a random number sampling, and spectral methods (extended Fourier analysis sensitivity test and random balanced design), based on a periodic sequence (Saltelli et al., 2008). Here, Monte-Carlo methods are used.
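A Monte-Carlo estimation of the main and total effects of Eqs. (12) and (13) can be sketched as follows. This is a generic sketch of the Saltelli sampling scheme with the Saltelli (2010) main-effect and Jansen (1999) total-effect estimators, not the exact implementation used in this paper; the test model is an arbitrary assumption.

```python
import numpy as np

def sobol_indices(model, n_params, N, seed=0):
    """Estimate main effects S_M and total effects S_T by Monte-Carlo.

    model: vectorized function mapping an (N, n_params) array to N outputs.
    Uses two independent sample matrices A, B and the column-swapped
    matrices A_B^(i) of the Saltelli scheme."""
    rng = np.random.default_rng(seed)
    A = rng.random((N, n_params))
    B = rng.random((N, n_params))
    fA, fB = model(A), model(B)
    V = np.var(np.concatenate([fA, fB]), ddof=1)   # total variance of the output
    S_M, S_T = np.empty(n_params), np.empty(n_params)
    for i in range(n_params):
        AB = A.copy()
        AB[:, i] = B[:, i]                         # A with column i taken from B
        fAB = model(AB)
        S_M[i] = np.mean(fB * (fAB - fA)) / V      # main effect (Saltelli, 2010)
        S_T[i] = 0.5 * np.mean((fA - fAB) ** 2) / V  # total effect (Jansen, 1999)
    return S_M, S_T

# Additive test model 3*x1 + x2: analytically S_M = S_T = (0.9, 0.1)
S_M, S_T = sobol_indices(lambda X: 3 * X[:, 0] + X[:, 1], 2, N=20000)
```

For this additive model the main and total effects coincide; a gap between them, as observed later in Section 4.3, signals interactions.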

In contrast to other global sensitivity measures (Borgonovo, 2007; Roustant, 2013), variance-based methods are well established, and efficient algorithms for estimating the indices exist. Additionally, f only has to be square-integrable (in contrast to derivative-based SA, where the derivative of f must exist), which makes the method capable of analyzing overconstrained systems in tolerancing, where discontinuous system behavior can occur. Furthermore, the main and total effects can also be calculated if the model cannot be formulated as a function f. Then, \(S_{M_{i}}\) and \(S_{T_{i}}\) can be expressed in terms of conditional variances (Saltelli et al., 2008). Therefore, regularity issues of the simulation model are neglected here.

Variance-based SA has partially been used for tolerance analysis in recent publications (Schleich and Wartzack, 2013; Walter et al., 2013). However, these methods rely on simulation models with scalar input parameters. In tolerancing based on convex hull techniques, the inputs of the tolerance stack-up are domains. Thereby the following question arises: “How to quantify the contribution of one domain to another domain?” This can only be answered by keeping in mind the input of the convex hull methods.

3.2 Deviation characteristic

Convex-hull-based models are commonly not formulated as simulations with single independent input and output parameters. Variance-based SA algorithms are separate modules which analyze a simulation. The aim of sensitivity analyses is to derive recommendations for regulating variables. In variance-based SA, these regulating variables are parameters. Additionally, to measure the variance of the output, the output also has to be a parameter. Therefore, variance-based SA needs input and output parameters. In tolerancing, a suitable framework for variance-based SA is the following: “Change the tolerance values, while the tolerance scheme remains.”

In this context, the tolerance values have to be transferred into deviating geometry, which is assembled, and the output is measured. The basis for this transfer is the deviation characteristic λ(x, θ) of an SDT (x, θ) with 0 ≤ λ(x, θ) ≤ 1 (Ziegler and Wartzack, 2013). The characteristic measures the quality of a feature's deviation with respect to the adopted tolerance. λ is defined as follows:

$$\lambda (x,\theta) = {2 \over t}\mathop {\max }\limits_{u \in f} \left\Vert {({T^{{\rm{lin}}}}(x,\theta) - I)\cdot u} \right\Vert.$$
(14)

For λ = 1 the deviating feature is just inside the tolerance zone; for λ = 0 the feature is nominal. In the deviation domain framework, this means for (x, θ) ∈ D f that λ(x, θ) = 0 is equivalent to (x, θ) = (0, 0). Additionally, λ(x, θ) ≤ 1 is equivalent to (x, θ) ∈ D f . A visualization for two dimensions is shown in Fig. 2. For the SA algorithm, the probability density function ρ for a tolerance with tolerance value t > 0 is uniformly distributed on [0, t].
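Restricted to the control points of the feature, Eq. (14) can be evaluated directly. The sketch below assumes a rectangular plane feature and the Euclidean norm; geometry and values are illustrative.

```python
import numpy as np

def deviation_characteristic(x, theta, control_points, t):
    """Eq. (14): lambda = (2/t) * max_u ||(T^lin - I) u||, evaluated
    on the control points of the feature (Euclidean norm assumed)."""
    t1, t2, t3 = theta
    # rotation part of (T^lin - I), sign convention of Eq. (1)
    R = np.array([[0.0, -t1,  t2],
                  [ t1, 0.0, -t3],
                  [-t2,  t3, 0.0]])
    U = np.asarray(control_points, float)
    disp = U @ R.T + np.asarray(x, float)       # displacement of every control point
    return (2.0 / t) * np.linalg.norm(disp, axis=1).max()

# Corner points of a 2 x 2 rectangular plane feature
CP = [[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]]
lam = deviation_characteristic([0, 0, 0.05], [0, 0, 0], CP, t=0.1)
```

Here a pure z-translation of t/2 moves the feature exactly onto the tolerance-zone boundary, so λ evaluates to 1.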

Fig. 2

A deviating 2D line (oblique line left) and the associated SDT (circle right). The deviation characteristic is λ = t′/t

3.3 Assemblability measure

As mentioned in Section 3.2, the SA algorithm requires an output parameter. In the deviation domain approach, the assemblability output is a resulting clearance domain \(C_{p_{i},p_{j}}\). A characteristic parameter is its volume \(\vert C_{p_{i},p_{j}}\vert:={\rm Vol}(C_{p_{i},p_{j}})\). Although the domain volume is not used in the DD approach, it is used with T-Maps® (Ameta et al., 2011). The 'relative' clearance domain volume \(\vert C_{p_{i},p_{j}}^{{\rm dev}}\vert/\vert C_{p_{i},p_{j}}^{{\rm nom}}\vert\) in the DD approach is similar to the T-Map® approach. \(C_{p_{i},p_{j}}^{{\rm dev}}\) is the clearance domain for a given set of SDTs ((x1, θ1), (x2, θ2), …, (xn, θn)), while \(C_{p_{i},p_{j}}^{{\rm nom}}\) is the nominal clearance domain where all SDTs are (0, 0). Therefore, the relative clearance domain volume is used as the output parameter for assemblability studies.

3.4 Computational method

The computational procedure of assemblability studies is structured as shown in Fig. 3. In the first step, the sampling of N deviating parts has to be created. To this end, a large sample of SDTs is created and scaled. From them, N SDT samples of every feature are chosen according to a Latin hypercube sampling (LHS) (see Section 3.4.2). This sampling is then used to perform the sensitivity analysis. In Fig. 3, the Sobol' algorithm (Saltelli et al., 2008) is used to estimate the sensitivity indices. In the step 'recombine sampling', the SDT sampling is separated into two samples of equal size. They are then recombined, and subsequently the tolerance simulation is performed. The sets of SDTs define transformation matrices, which are used to displace the collision control points. Finally, the resulting clearance domain volume is estimated with another Monte-Carlo sample (see Section 3.4.3), for which the control points are tested for collision. The SA algorithm analyzes the relation between the relative clearance domain volume and the tolerance values, without taking into account detailed information about the tolerance simulation.

Fig. 3

Flowchart of the SA approach for assemblability studies, performed with the Sobol’ algorithm

3.4.1 Conditions

For performing the method, the following conditions must hold:

  1. For every feature, only one geometric tolerance is adopted.

  2. There should not be many unassemblable configurations. This is a consequence of the relative clearance domain: for a clearance domain with \(\vert C_{p_{i},p_{j}}\vert=0\) no variance appears, so the variance-based SA cannot measure the influence of any input.

If both conditions are fulfilled, the method can be applied. These conditions result from the actual status of the sensitivity analysis method for assemblability studies.

3.4.2 Sampling of deviation characteristics and SDTs

The tolerance values are generated with an LHS (McKay et al., 1979), a common sampling method for computer experiments. An LHS can be created easily and shows better results for a fixed sample size than a plain Monte-Carlo sample. In our case, the LHS is created by a selection process from a large number of SDT samples.

LHS is a so-called stratified sampling, in which the unit cube [0, 1]n is evenly filled by sampling points. In an LHS, for generating N samples of n input parameters, the range of every input parameter is divided into N cells with probability 1/N each. For every input parameter, a sample is placed in every cell. The samples of the input parameters are then combined following a Latin square. In Fig. 4, a sampling for two tolerances with N = 7 samples is shown. On the upper left, the tolerance characteristics are shown, which are arranged with an LHS. It is characteristic of an LHS that every row and column of the grid contains only one sample. The tolerances are both position tolerances, one for a plane and one for an axis, as seen in Fig. 4 on the lower right. The tolerance on the plane restricts a translation and a rotation (lower left), and the tolerance on the axis restricts two translations (upper right). As the tolerance on the axis has a diameter symbol, its tolerance zone is circular.
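The cell-and-permutation construction described above can be sketched in a few lines. This is a plain LHS on the unit cube in the sense of McKay et al. (1979), not the selection-based variant used in this paper; function name and seeding are illustrative.

```python
import numpy as np

def latin_hypercube(N, n_params, seed=0):
    """Plain LHS on [0, 1]^n: each parameter's range is split into N cells
    of probability 1/N, one sample is placed per cell, and the cells are
    combined by an independent random permutation per parameter."""
    rng = np.random.default_rng(seed)
    sample = np.empty((N, n_params))
    for j in range(n_params):
        # one point in each cell [k/N, (k+1)/N), rows shuffled by a permutation
        sample[:, j] = (rng.permutation(N) + rng.random(N)) / N
    return sample

X = latin_hypercube(7, 2)   # cf. the N = 7, two-tolerance example of Fig. 4
```

By construction, every row and every column of the 7 × 7 grid contains exactly one sample, which is the defining LHS property mentioned above.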

Fig. 4

LHS of a position tolerance of a plane λ 1 and a position tolerance of an axis λ 2 in 2D. The DOFs for λ 1 are a translation and a rotation, for λ 2 two translations

In the next step, an SDT sample is generated for every input parameter λ i . The cells of the LHS are transformed to layers of the deviation domains, and a sample is chosen in every layer of the domain. In Fig. 4, the two deviation domain layers for λ1 and λ2 are shown. Each domain has a different form: λ1 represents the 2D tolerance zone of the plane, while λ2 represents the position tolerance zone of the axis.

3.4.3 Clearance-approximation

The size of the clearance domain \(C_{p_{i},p_{j}}^{{\rm dev}}\) has to be approximated for every sample of deviating SDTs. A simple way to do this is to approximate \(\vert C_{p_{i},p_{j}}^{{\rm dev}}\vert\) with another Monte-Carlo sample (Sobol', 1994). The idea of the Monte-Carlo estimation of a multidimensional volume is very simple. Let A be a bounded domain inside a hyperrectangle H = [a1, b1] × [a2, b2] × … × [a n , b n ]. For N Monte-Carlo samples uniformly distributed in H, the ratio \({\textstyle{{N^{\prime}} \over N}}\vert H\vert\), where N′ is the number of samples inside A, converges to |A|. An example is shown in Fig. 5.
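The hit-ratio estimator described above can be sketched generically; the indicator function and the disc example are illustrative assumptions, not the clearance domain of the paper.

```python
import numpy as np

def mc_volume(indicator, lo, hi, N=100_000, seed=0):
    """Estimate |A| for a bounded A inside the hyperrectangle H = [lo, hi]
    by the hit ratio (N'/N)|H| described in the text."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    samples = lo + (hi - lo) * rng.random((N, lo.size))
    hits = indicator(samples)            # boolean array: sample inside A?
    vol_H = np.prod(hi - lo)
    return hits.mean() * vol_H

# Unit disc inside H = [-1, 1]^2: the estimate converges to pi
vol = mc_volume(lambda s: (s ** 2).sum(axis=1) <= 1.0, [-1, -1], [1, 1])
```

For the relative clearance volume of Eq. (16), |H| cancels, so only the two hit counts Ndev and Nnom are needed.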

Fig. 5

Approximation of the volume of A with a Monte-Carlo sample inside H. Samples inside A are rhombuses; samples outside A are crosses

For approximating the relative clearance volume, first the domain covered by at least one possible clearance domain,

$$C_{{p_i},{p_j}}^{\max } = \bigcup\limits_{(({x^1},{\theta ^1}), \ldots ({x^n},{\theta ^n})) \in D_f^1 \times \cdots \times D_f^n} {C_{{p_i},{p_j}}^{{\rm{dev}}}},$$
(15)

where \(C_{p_{i},p_{j}}^{{\rm dev}}=C_{p_{i},p_{j}}^{{\rm dev}}((x^{1},\theta^{1}),\ldots,(x^{n},\theta^{n}))\), is derived. Next, the smallest hyperrectangle H with \(C_{p_{i},p_{j}}^{\max}\subset H\) is specified. Finally, the nominal clearance domain is calculated. For approximating the relative clearance domain volume, the volume |H| is not important, as

$${{\vert C_{{p_i},{p_j}}^{{\rm{dev}}}\vert} \over {\vert C_{{p_i},{p_j}}^{{\rm{nom}}}\vert}} \approx {{{\textstyle{{{N^{{\rm{dev}}}}} \over N}}\vert H\vert} \over {{\textstyle{{{N^{{\rm{nom}}}}} \over N}}\vert H\vert}} = {{{N^{{\rm{dev}}}}} \over {{N^{{\rm{nom}}}}}},$$
(16)

where Ndev is the number of MC samples inside \(C_{p_{i},p_{j}}^{{\rm dev}}\), and analogously for Nnom. It is very important to keep the number of clearance samples N as small as possible, as it has a strong influence on the computational costs of the sensitivity analysis.

4 Application example

To show the approach’s practical use, it is applied to a pin-hole assembly, as detailed in Fig. 6. As there is only one geometric tolerance each on pin and hole, condition 1 is fulfilled.

Fig. 6

Geometry of pin & hole, coordinate system and a plausible assembly situation of deviating pin & hole

4.1 Pin-hole connection

Both parts consist of two cylindrical surfaces, with diameter tolerances and a coaxiality tolerance between them. The remaining parameters are fixed for the analysis, owing to their small influence on the clearance. The tolerance values are set to t i = 1 for i = 1, 2, 3.

Although the maximum material condition for the pin-hole connection would ensure assemblability, for cases with more parts this does not have to be the case. The application example should demonstrate the approach, which is of practical use for assemblies where the maximum material condition is not sufficient to ensure assemblability.

4.2 Sample estimation

To receive numerically robust results in MC estimation, the necessary sample size is an important factor. The number of samples should be as small as possible and as large as necessary.

4.2.1 Clearance-approximation

The two pin cylinders have control points on their edges, which are used for collision control of the pin with the hole part. In a first step, the scaling of the clearance domain with respect to the chosen tolerance values has to be analyzed. Fig. 7 shows for how many of 1000 Monte-Carlo samples of the SA algorithm the pin positions are accepted with respect to the clearance samples. Because of the high number of accepted samples in the nominal position, the assembly fulfills condition 2 and is therefore suitable for the SA method. The number of clearance samples was varied between 100 and 1000. Robust results were found for 200 or more samples, so it was set to 200.

Fig. 7

Estimation of the clearance over 1000 sensitivity analysis samples. Size and color of the balls show the number of samples for which the position is accepted

4.2.2 Deviation characteristics

With a robust clearance estimation, the simulation gives reliable results. For the sampling, all deviation characteristics are assumed to be uniformly distributed. Next, the sample size for the SA algorithm has to be defined. The authors’ experience shows that a sensitivity analysis of tolerances should be performed with at least a few thousand samples to achieve acceptable approximation errors. For evaluating the results with respect to the sample size, a relative error \({\rm Err}_{N_{1},N_{2}}\) between two sample sizes N1 and N2 is measured for the main and total effects of n tolerances:

$$\matrix{{{\rm{Err}}_{{N_1},{N_2}}^{{S_M}} = \sum\limits_{i = 1}^n {} \left\vert {S_{{M_i}}^{{N_1}} - S_{{M_i}}^{{N_2}}} \right\vert,} \hfill \cr {{\rm{Err}}_{{N_1},{N_2}}^{{S_T}} = \sum\limits_{i = 1}^n {} \left\vert {S_{{T_i}}^{{N_1}} - S_{{T_i}}^{{N_2}}} \right\vert.} \hfill \cr}$$
(17)

The sum of the absolute differences between single main and total effects is a rather conservative estimate, as all variations of the indices are summed. Therefore, the authors decided on a bound of 0.1 for both error measures, which is relatively strict as it is the sum of six sensitivity index variations. In Fig. 8 it can be seen that this bound is reached between 10 000 and 20 000 samples. Therefore, the results for 10 000 or more samples are considered credible.
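The error measure of Eq. (17) reduces to a sum of absolute index differences; a minimal sketch with illustrative index values:

```python
import numpy as np

def index_error(S_a, S_b):
    """Err of Eq. (17): sum of absolute differences between the sensitivity
    indices obtained with two different sample sizes N1 and N2."""
    return np.abs(np.asarray(S_a, float) - np.asarray(S_b, float)).sum()

# Hypothetical main effects for three tolerances at two sample sizes
err = index_error([0.5, 0.3, 0.1], [0.52, 0.28, 0.11])   # ≈ 0.05, below the 0.1 bound
```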

Fig. 8

Relative errors \({\bf Err}_{N_{1},N_{2}}\) for main and total effect, the x -axis denotes N 1 and N 2

4.3 Results

The results for a sensitivity analysis with 10 000 deviation characteristic combinations and 200 samples for the clearance approximation can be found in Fig. 9. The parameter d designates a diameter tolerance, co a coaxiality tolerance, h a tolerance on the hole part, and p a tolerance on the pin. Furthermore, l designates the left cylinder and r the right cylinder. Owing to their higher range of tolerance (±1 mm in contrast to 1 mm), the diameter tolerances have a higher influence than the coaxiality tolerance. All parameters show a big difference between main and total effect sensitivities. This is an indicator for strong interactions. Especially the coaxiality has a very small main effect, which indicates that coaxiality deviations have a significant influence on the clearance of the pin only in combination with other deviations. The diameter tolerances of the right cylinders in pin and hole show a higher influence on the clearance than the left ones, for both main and total effects. This can be explained by the higher influence of small rotations due to the higher length/diameter ratio of \(d_{h}^{r}\) and \(d_{p}^{r}\).

Fig. 9

Main and total effects for 10 000 SA samples and 200 clearance samples

5 Discussion and conclusions

The results for the pin-hole connection show the necessity of global sensitivity analysis in tolerancing based on convex hull techniques, given the large difference between main and total effects. This is a good motivation to develop the method further, especially since the conditions of Section 3.4 limit the method to special cases.

The proposed method needs further analysis. There are more efficient volume estimators for lower dimensional spaces. Additionally, the sampling method for the deviating parameters is a “trial and error” method, which is computationally inefficient. The method has shown its suitability for a 2D model. Given the calculation runtime of 1 to 30 min for this model, the computational costs should be reduced to achieve an acceptable runtime for 3D models.

The motivation for developing the proposed SA method was the realization, through contact with practitioners, that the convex hull methods are hard to understand even for tolerance experts in industry. This is crucial for an engineering method which should be used in practice after 15 years of research development since the first detailed study of Roy and Li (1999). One possibility to extend the proposed method is to consider reliability SA methods as in (Lemaitre et al., 2015) and to combine them with the deviation characteristic concept. The relative clearance domain volume in this case should be replaced by the probability of failure, which has to be estimated numerically as in (Beaucaire et al., 2013). Therefore, this paper is a first step towards closing the gap between norm-conform tolerance simulations based on convex hull methods on the one hand and the necessity of recommendations for decision making processes on the other.