1 Introduction

Due to the imprecision inherent in manufacturing processes, the theoretical dimensions of a part cannot be attained repeatably, which may degrade product performance. To ensure the desired behavior and performance of the system in spite of manufacturing imprecisions, the component characteristics are assigned a tolerance zone within which the values of the characteristics, i.e., situation and intrinsic, must lie.

Moreover, as technology advances and performance requirements continually tighten, the costs and the required precision of assembly operations increase. More attention is needed in tolerance design to be able to manufacture high-precision assemblies at lower costs. Tolerance analysis is therefore a key element in the manufacturing industry for improving product quality and decreasing manufacturing costs (Qureshi et al., 2012).

Tolerance analysis concerns the verification of functional requirements after tolerances have been specified on each isolated part. Analysis methods are divided into two categories (Dantan and Qureshi, 2009):

  1. Displacement accumulation, whose goal is to model the influences of the deviations on the geometrical behavior of the mechanism. This relationship takes the following form: Y = f(X, G) (Nigam and Turner, 1995), where Y is the response of the system (a characteristic such as a gap or a functional characteristic), X is the vector of deviation values (situation and/or intrinsic deviations) of the parts composing the mechanism, and G is the vector of gaps between the parts of the mechanism. The function f represents the deviation accumulation of the mechanism; it can be an explicit analytical expression, an implicit analytical expression, or a numerical simulation. The difficulty in determining the function f increases with the complexity of the studied system (Mhenni et al., 2007; Ballu et al., 2008). In the case of an analytical formulation, the mathematical behavior model can be written from the topological loop relations of the system, together with constraints that prevent surfaces from penetrating one another. In addition, the analysis method must consider the worst admissible configuration of gaps, which is found using an optimization algorithm. However, non-linear analytical models are difficult to handle in an optimization problem. Indeed, Qureshi et al. (2012) noticed that some optimization results were not admissible because constraints were violated: non-linear optimization algorithms may not find the global minimum, which is required to consider the worst-case scenario. Genetic evolutionary algorithms might be a solution, but they are very time-consuming, making them impractical within numerical simulations such as Monte Carlo (MC) simulations. This is why a linearization of the non-linear equations is required.

  2. Tolerance accumulation, which aims at simulating the composition of tolerances (i.e., linear tolerance accumulations, 3D accumulations) (Ameta et al., 2011). The admissible deviations are mapped using several vector spaces in a region of a hypothetical parametric space. The relevant literature mentions several techniques to represent geometrical or dimensioning tolerances, among which are T-maps (Davidson et al., 2002; Bhide et al., 2005; Davidson and Shah, 2012), gap spaces (Zou and Morse, 2003; Morse, 2004), and deviation domains (Giordano and Duret, 1993; Giordano et al., 2005). All these methods require mathematical tools, such as Minkowski sums and intersections of domains (Mansuy et al., 2011), to compute the quality level of the product (a minimal sketch of such a Minkowski sum is given after this list). To be able to perform these operations, behavior models must be in a linear form.
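
For illustration only, the following minimal Python sketch (not from the cited works) computes the Minkowski sum of two convex 2D deviation domains given by their vertex lists; the function names and the square example domains are hypothetical choices for this example.

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(P, Q):
    """Vertices of the Minkowski sum of two convex polygons given as
    (n, 2) and (m, 2) vertex arrays: sum every vertex pair, then keep
    the convex hull of the resulting point cloud."""
    cloud = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)
    return cloud[ConvexHull(cloud).vertices]

def square(r):
    # hypothetical deviation domain: a square of half-width r
    return np.array([[r, r], [-r, r], [-r, -r], [r, -r]], dtype=float)

# Composing two square domains of half-widths 0.01 and 0.02 yields a
# square domain of half-width 0.03:
print(minkowski_sum(square(0.01), square(0.02)))
```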

Both categories need a linearization of the behavior model to compute the quality levels of the product. In fact, tolerance analysis is totally dependent on the models chosen to describe the system behavior.

This study focuses on the impact of the linearization of the behavior model on statistical tolerance analysis based on the displacement accumulation approach; in particular, the impact of the linearization strategy is studied. To manage the probability of out-of-tolerance products and evaluate the impact of component tolerances on product performance, designers need to simulate the influences of manufacturing imprecisions with respect to the functional requirements. In this case, the goal of tolerance analysis is to predict a quality level during the design stage; the technique consists of computing two probabilities of failure, one relative to the assembly, Pfa, and one relative to the functionality, Pf.

This paper shows that the linearization procedure has an impact on the accuracy of the probability of failure, and presents an adaptive algorithm able to compute an accurate probability faster than the classical solution method.

2 Statistical tolerance analysis of over-constrained mechanisms

We can distinguish three main issues in tolerance analysis (Dantan et al., 2012; Qureshi et al., 2012):

  1. The models representing the geometrical deviations;

  2. A mathematical model of the geometrical behavior with deviations;

  3. The development of the analysis methods.

This study focuses on the impact of the second issue on the third one. The goal of this section is to describe the current approach to solving a tolerance analysis problem, from its formulation to the solution method. Section 2.1 deals with the mathematical formulation of a statistical tolerance analysis problem. Section 2.2 presents the analysis method used to compute the probabilities of failure.

2.1 Formulation of a tolerance analysis problem

Most mechanisms have gaps between parts, which makes their behavior more complex to model. Gaps are considered free variables; they depend on the geometrical deviations and on the configuration of the parts of the mechanism (Qureshi et al., 2012). The behavior model must take those gaps into account by bounding the associated displacements to avoid the interpenetration of one surface into another. Boundaries are therefore defined using interface constraints written as

$${\left\{ {C_{\rm{i}}^{(k)}(X,G) \leq 0} \right\}_{k = 1,2, \ldots, {N_{{C_{\rm{i}}}}}}},$$
(1)

where \(N_{C_{\rm i}}\) is the number of interface constraints. The mechanical behavior of the mechanism is modeled using compatibility equations; these are the composition relations of displacements in the various topological loops. This set of equality equations provides a linear system to be satisfied. They are written as

$${\left\{ {C_{\rm{c}}^{(k)}(X,G) = 0} \right\}_{k = 1,2, \ldots, {N_{{C_{\rm{c}}}}}}},$$
(2)

where \(N_{C_{\rm c}}\) is the number of compatibility equations.

The formulation of both requirements is based on quantifiers. The assembly requirement is given by Qureshi et al. (2012): “For all admissible deviations, there exists a gap configuration such that the assembly requirements and the behavior constraints are verified”. The assembly probability of failure Pfa is then given by Eq. (3). The technique to compute this probability is to minimize one interface constraint subject to all compatibility equations and interface constraints. If a solution exists, the assembly is possible; if no solution exists, the assembly is impossible.

$${P_{{\rm{fa}}}} = {\rm{Prob}}(\nexists G \in {{\mathbb{R}}^m}\,\vert \,{C_{\rm{c}}}(X,G) = 0\;{\rm{and}}\;{C_{\rm{i}}}(X,G) \leq 0).$$
(3)

The functional requirement is also defined by Qureshi et al. (2012): “For all admissible gap configurations, the geometrical behavior and the functional requirements are verified”. The functional requirement implies that the functional characteristic Y must not exceed a threshold value Yth. A functional condition is defined as Cf = Yth − Y ≥ 0; a positive value ensures that the mechanism is functional. However, not all configurations have to be checked: to compute the functional probability of failure Pf, it is sufficient to find at least one admissible configuration where the functional condition is not respected:

$${P_{\rm{f}}} = {\rm{Prob}}(\exists {G_{{\rm{adm}}}} \in {{\mathbb{R}}^m}\,\vert \,{C_{\rm{f}}}(X,G) \leq 0),$$
(4)

where Gadm are gap values verifying all interface constraints and compatibility equations. This corresponds to the worst gap configuration to be found; this configuration provides the worst value of the functional condition. The technique to find the worst admissible configuration of gaps for the functional condition is to minimize Cf subject to all constraints, as shown in Eq. (5).

$$\matrix{{{P_{\rm{f}}} = {\rm{Prob}}(\mathop {\min }\limits_G {C_{\rm{f}}}(X,G) \leq 0),} \hfill \cr {{\rm{with}}\;\;{C_{\rm{c}}}(X,G) = 0,\;{C_{\rm{i}}}(X,G) \leq 0.} \hfill \cr}$$
(5)
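
For illustration, once the constraints are linear (see Section 3), the worst-gap optimization of Eq. (5) reduces to a linear program. The following minimal sketch uses scipy.optimize.linprog; the cost vector c_f and the matrices A_eq, b_eq, A_ub, b_ub, which stand for the linearized compatibility equations and interface constraints of a given deviation sample, are hypothetical placeholders, not quantities from the paper.

```python
from scipy.optimize import linprog

def worst_functional_condition(c_f, A_eq, b_eq, A_ub, b_ub):
    """Minimize the linear part of C_f over the gaps G, subject to
    compatibility equations  A_eq @ G == b_eq   (C_c(X, G) = 0)
    and interface constraints A_ub @ G <= b_ub  (C_i(X, G) <= 0)."""
    res = linprog(c=c_f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * len(c_f),  # gaps are free variables
                  method="highs")
    # res.status == 2 signals infeasibility: no admissible gap configuration
    # exists, i.e., this deviation sample cannot be assembled.
    return res
```

Any constant part of C_f (depending on the sample X and on Yth) is left to the caller, since linprog only handles the linear term in G.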

2.2 Solution method based on Monte Carlo simulation and optimization

The classic solution method combines a Monte Carlo simulation with an optimization algorithm (Dantan and Qureshi, 2009; Qureshi et al., 2012). The functional condition is linear, and all constraints must be linear as well; if they are not, the linearization procedure proposed in Section 3 is applied. An optimization scheme based on a simplex technique is therefore chosen to solve the optimization problems. The different steps of the solution procedure are described below (a sketch of the whole procedure follows Eq. (10)):

  1. Defining the Monte Carlo population: a set of N samples x = {x(1), x(2), …, x(N)} from the random vector X is created;

  2. Launching the optimization algorithm for each sample;

  3. Estimating Pfa and/or Pf; the probabilities of failure are estimated using the following equations:

    $${\tilde P_{{\rm{fa}}}} = {1 \over N}\sum\limits_{i = 1}^N {I_{{D_{{\rm{fa}}}}}}({x^{(i)}}),$$
    (6)
    $${\tilde P_{\rm{f}}} = {1 \over N}\sum\limits_{i = 1}^N {I_{{D_{\rm{f}}}}}({x^{(i)}}),$$
    (7)

    where ID(x) is the indicator function of the failure domain D; for the assembly requirement, it is defined as

    $${I_{{D_{{\rm{fa}}}}}}(x) = \begin{cases} 1, & \text{if no admissible gap configuration exists for } x, \\ 0, & \text{otherwise}. \end{cases}$$
    (8)

For the functional requirement, it is defined as

$${I_{{D_{\rm{f}}}}}(x) = \begin{cases} 1, & \text{if } \min\limits_G {C_{\rm{f}}}(x,G) \leq 0, \\ 0, & \text{otherwise}. \end{cases}$$
(9)

The number of samples N can be chosen to yield a coefficient of variation (Eq. (10)) lower than a given target, typically below 10%. For instance, estimating a probability of 0.001 with a 10% coefficient of variation requires about 100 000 samples by Eq. (10).

$${\rm{CO}}{{\rm{V}}_{{P_{\rm{f}}}}} = \sqrt {{{1 - {P_{\rm{f}}}} \over {N{P_{\rm{f}}}}}}.$$
(10)
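
The three steps above can be summarized by the following minimal sketch, which reuses the hypothetical worst_functional_condition() defined after Eq. (5); sample_X and build_constraints are assumed helpers that draw one sample of the deviations and assemble its linearized constraint matrices (with f0 the constant part of the functional condition for that sample).

```python
import numpy as np

def monte_carlo_pf(sample_X, build_constraints, N=100_000, seed=0):
    """Estimate P_fa and P_f (Eqs. (6) and (7)) by Monte Carlo simulation."""
    rng = np.random.default_rng(seed)
    n_fail_assembly = n_fail_function = 0
    for _ in range(N):
        x = sample_X(rng)  # step 1: draw one sample of the deviations X
        # step 2: worst-gap optimization for this sample (linearized model)
        c_f, f0, A_eq, b_eq, A_ub, b_ub = build_constraints(x)
        res = worst_functional_condition(c_f, A_eq, b_eq, A_ub, b_ub)
        if res.status == 2:            # infeasible: the assembly fails
            n_fail_assembly += 1
        elif res.fun + f0 <= 0.0:      # worst C_f <= 0: functionality fails
            n_fail_function += 1
    # step 3: probability estimators and coefficient of variation, Eq. (10)
    p_fa, p_f = n_fail_assembly / N, n_fail_function / N
    cov = np.sqrt((1.0 - p_f) / (N * p_f)) if p_f > 0 else float("inf")
    return p_fa, p_f, cov
```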

3 Illustration of the linearization procedure

This section presents the linearization strategies considered. Section 3.1 introduces a simple mechanism to show which type of non-linear equations needs to be linearized. Section 3.2 describes the mathematical linearization procedure applied to such non-linear constraints.

3.1 Illustration on 2D mechanism

The example in Fig. 1 is a simplified 2D version of the industrial mechanism shown in Fig. 6. The mechanism consists of two pins on part (1), in relative displacement with part (2); both pins are fixed in the workpiece. The following geometrical deviations are considered (Qureshi et al., 2012):

  1. Intrinsic deviations: the diameters of the pins and of their pin-holes: d1a, d2a, d1b, and d2b.

  2. Situation deviations: orientation and position variations of the substitute surfaces a and b with respect to their respective nominal surfaces. For example, the deviation of the substitute surface a of part (1) with respect to its nominal surface is written u1a1 and v1a1 (translations along the x- and y-axes) and γ1a1 (rotation around the z-axis).

Fig. 1 Representation of the simplified 2D hyperstatic mechanism with a side view

Small displacement torsors (Bourdet and Clément, 1976; Legoff et al., 2004) are used to represent gaps between the joints of the mechanism. Two torsors are built to model the variations in orientation and location of each pin. These torsors are written as {G1a/2a}A and {G1b/2b}B, where “1a/2a, A” means that these are the variations of the surface a of part (1) with respect to the surface a of part (2), expressed at point A:

$$\{G_{1a/2a}\}_A = \left\{ \begin{matrix} 0 & u_{1a2aA} \\ 0 & v_{1a2aA} \\ \gamma_{1a2a} & 0 \end{matrix} \right\}_A, \quad \{G_{1b/2b}\}_B = \left\{ \begin{matrix} 0 & u_{1b2bB} \\ 0 & v_{1b2bB} \\ \gamma_{1b2b} & 0 \end{matrix} \right\}_B.$$
(11)

The geometrical behavior of this mechanism is made up of three compatibility equations Cc(X, G) = 0 and two interface constraints Ci(X, G) ≤ 0 characterizing the non-interpenetration of the pins in their pin-holes. These constraints are given by the quadratic Eqs. (12) and (13):

$$C_{\rm{i}}^{(1)} = u_{1a2aA}^2 + v_{1a2aA}^2 - {\left({{{{d_{2a}} - {d_{1a}}} \over 2}} \right)^2} \leq 0,$$
(12)
$$C_{\rm{i}}^{(2)} = u_{1b2bB}^2 + v_{1b2bB}^2 - {\left({{{{d_{2b}} - {d_{1b}}} \over 2}} \right)^2} \leq 0,$$
(13)

with d1a < d2a and d1b < d2b.

These inequations define an admissible displacement area for the centers A and B of the pins so that each pin does not penetrate its pin-hole (Fig. 2). Linearization has to be applied to such constraints to simplify the optimization scheme, because non-linear constraints are more difficult to handle than linear ones. Since Eqs. (12) and (13) are similar, the linearization procedure is illustrated on a single reference equation, Eq. (14).

Fig. 2 Admissible area of displacement of pin center A to satisfy the interface constraints of joint 1a/2a

3.2 Illustration of linearization strategies on constraints

This section describes different strategies for the linearization of the non-linear equations. These equations result from cylinder-type joints and provide quadratic interface constraints. For a simple 2D circle, the quadratic equation to be linearized is written as

$${C_{\rm{i}}}(u,v) = {u^2} + {v^2} - \Delta {R^2},$$
(14)

where ΔR is the radius difference defining the circle (e.g., (d2a − d1a)/2 in Eq. (12)), and u and v represent the displacements along the x- and y-axes.

Linearization corresponds to a discretization of the admissible area of displacement, which is a 2D circle, into a polygon whose number of facets depends on the number Nd of discretizations. Several discretization strategies are considered (Fig. 3). Given an interface constraint Ci(u, v) = u² + v² − ΔR², the linearization operation provides new inequations depending on the type of linearization:

  • Type 1: Discretization following an inner polygon:

    $$C_{\rm{i}}^{(k)}(u,v) = u\cos {\theta _k} + v\sin {\theta _k} - \Delta R\cos {{{\theta _1}} \over 2} \leq 0.$$
    (15)
  • Type 2: Discretization following an outer polygon:

    $$C_{\rm{i}}^{(k)}(u,v) = u\cos {\theta _k} + v\sin {\theta _k} - \Delta R \leq 0,$$
    (16)

    where θk = 2kπ/Nd, for k = 1, 2, …, Nd, is an angle whose parameter Nd enables the number of linearizations to be adjusted, so that θ1 = 2π/Nd. One interface constraint becomes Nd interface constraints, significantly increasing the number of constraints, but these constraints have the advantage of being linear in the displacements (a minimal sketch of both strategies is given after Fig. 3).

Fig. 3 The two types of discretization of the real admissible area of displacement, here with six facets
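
A minimal sketch of both discretization types is given below; the function name linearize_circle and the (A, b) matrix representation of the constraints are choices made for this illustration, not notation from the paper.

```python
import numpy as np

def linearize_circle(dR, Nd, strategy="inner"):
    """Return (A, b) such that A @ [u, v] <= b replaces u^2 + v^2 - dR^2 <= 0
    by an Nd-facet polygon (inner: inscribed, Eq. (15); outer: circumscribed,
    Eq. (16))."""
    theta = 2.0 * np.pi * np.arange(1, Nd + 1) / Nd      # theta_k = 2*k*pi/Nd
    A = np.column_stack([np.cos(theta), np.sin(theta)])  # facet normals
    rhs = dR * np.cos(np.pi / Nd) if strategy == "inner" else dR
    return A, np.full(Nd, rhs)

# A point on the circle boundary is rejected by the inner polygon but
# accepted by the outer one:
p = np.array([0.02, 0.0])
A, b = linearize_circle(dR=0.02, Nd=6, strategy="inner")
print(np.all(A @ p <= b))   # False
A, b = linearize_circle(dR=0.02, Nd=6, strategy="outer")
print(np.all(A @ p <= b))   # True
```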

4 Confidence interval based algorithm for the assembly requirement

In this section, an algorithm is proposed to compute an accurate probability faster than the current solution method. The procedure is based on a Monte Carlo simulation and takes advantage of the confidence interval on the probability of failure obtained by combining both linearization strategies for a given number of linearizations. This algorithm is tested and compared with the classical Monte Carlo method in Section 5.

The assembly requirement has two interesting properties that are used in the proposed algorithm. First, considering the inner strategy, if a sample point enables the mechanism to be assembled with a given number of linearizations Nd1, then it will still do so with a greater number of linearizations Nd2 > Nd1, provided the vertices of the coarser polygon coincide with vertices of the finer one (e.g., Nd2 is a multiple of Nd1, as in Fig. 4). Hence, to improve the probability accuracy with this strategy, it is sufficient to re-evaluate, with a greater number of linearizations, only the non-assembly points. As for the second property, for a given number of linearizations Nd, the set of non-assembly points obtained with the outer strategy is included in the set of non-assembly points obtained with the inner strategy. Therefore, to compute the probability of failure with the outer strategy, it is sufficient to evaluate only the non-assembly points of the inner strategy.

Fig. 4 Two inner polygons with different numbers of linearizations: Nd1 = 6, Nd2 = 18

Furthermore, all non-assembly points found with the outer strategy remain non-assembly points whatever greater value of Nd is used. Fig. 5 shows a linearization of the real area of displacement with both strategies; uncertain points can therefore be defined: these are the points lying between the inner and the outer polygons.

Fig. 5 Two polygons with the inner and outer strategies for the same number of linearizations

The goal of the algorithm is to achieve a classical Monte Carlo simulation with a small number of linearizations Nd. The probability of failure with the inner strategy is computed first. By re-evaluating with the outer strategy only the non-assembly points from the previous simulation, the probability of failure for this case can be quickly obtained. The relative confidence interval between both probabilities, Eq. (17), can then be computed; if this interval is small enough, the procedure ends, otherwise the number of linearizations is increased and the procedure is launched again on the uncertain points only. The algorithm steps are described below:

  1. Set a small number of linearizations, e.g., Nd0 = 4.

  2. Perform a Monte Carlo simulation with the inner strategy: compute Pfa_inner.

  3. Save the non-assembly points from the Monte Carlo simulation (points outside the inner polygon).

  4. Perform a Monte Carlo simulation on the saved points with the outer strategy: compute Pfa_outer.

  5. Save the uncertain points: non-assembly points with the inner strategy that are assembly points with the outer strategy (points outside the inner polygon and inside the outer polygon).

  6. Compute the relative confidence interval (RCI):

    $${\rm{RCI}} = {{{P_{{\rm{fa}}\_{\rm{inner}}}} - {P_{{\rm{fa}}\_{\rm{outer}}}}} \over {{P_{{\rm{fa}}\_{\rm{inner}}}}}}.$$
    (17)

  7. Compare the RCI to a precision criterion, e.g., 5%. If the RCI is greater than the criterion, start the while loop with a greater number of linearizations: Nd1 = 3Nd0.

  8. Do the while loop using the uncertain points:

    (a) Compute Pfa_inner and save the non-assembly points among the uncertain points.

    (b) Compute Pfa_outer and update the list of uncertain points.

    (c) Compute the RCI.

    (d) If RCI < 5%, then stop; else Nd(k+1) = 3Ndk.

The RCI is preferred to the confidence interval (CI) because this criterion is independent of the order of magnitude of the probability, so the stopping criterion can be expressed as a percentage. A sketch of the procedure is given below; the algorithm is tested in the next section on an industrial application.
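
The following minimal sketch (not the authors' implementation) illustrates this adaptive procedure. The helper assembles(x, Nd, strategy) is assumed to perform the assembly feasibility check of the linearized problem for one sample, e.g., with a simplex algorithm; the two returned values bracket the true Pfa.

```python
def adaptive_pfa(samples, assembles, Nd0=4, rci_target=0.05):
    """Adaptive estimation of Pfa; `assembles(x, Nd, strategy)` is an
    assumed feasibility-check helper for one deviation sample x."""
    N, Nd = len(samples), Nd0
    # Steps 2-3: full Monte Carlo run with the inner (conservative) strategy.
    failing = [x for x in samples if not assembles(x, Nd, "inner")]
    n_inner = len(failing)
    # Steps 4-5: re-evaluate only those points with the outer strategy;
    # outer failures are definitive, the remaining points are uncertain.
    n_outer = sum(not assembles(x, Nd, "outer") for x in failing)
    uncertain = [x for x in failing if assembles(x, Nd, "outer")]
    # Steps 6-8: refine the discretization on the uncertain points only.
    while n_inner > 0 and (n_inner - n_outer) / n_inner > rci_target:  # Eq. (17)
        Nd *= 3                                          # Nd(k+1) = 3 * Ndk
        failing = [x for x in uncertain if not assembles(x, Nd, "inner")]
        n_inner = n_outer + len(failing)                 # refined inner count
        n_outer += sum(not assembles(x, Nd, "outer") for x in failing)
        uncertain = [x for x in failing if assembles(x, Nd, "outer")]
    return n_outer / N, n_inner / N    # [Pfa_outer, Pfa_inner] bracket Pfa
```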

5 Estimation of the probability of assembly failure on an industrial mechanism

The impact study and the proposed procedure are applied to a test case based on a Radiall electrical connector (Fig. 6). The mechanism is composed of two parts which must be assembled; they are positioned by five cotters and two pins, which makes the mechanism highly over-constrained. The behavior model is not detailed in this paper because it involves too many equations; it comprises:

  1. Seventy-two compatibility equations;

  2. Forty linear interface constraints;

  3. Four quadratic interface constraints;

  4. Thirty-eight random variables;

  5. Seventy-five gap variables.

Fig. 6 Illustration of the Radiall connector

All parameter values (dimensions, means, standard deviations, and probability laws) are necessary for the tolerance analysis. However, for confidentiality reasons, the tolerances and values used in this study differ from the real ones. The random variables follow normal distributions. For this test case, only the assembly requirement is evaluated, so only the probability Pfa is computed.

A range of linearization numbers is defined and a Monte Carlo simulation is performed for each case. The number of samples N is chosen to yield a coefficient of variation of the probability of failure (Eq. (10)) of about 6%.

Results are given in Table 1; they show that both the number of linearizations Nd and the strategy influence the probability of failure values. The two strategies provide different results that converge toward the same value, which validates the linearization equations. Considering a target probability to be reached, the inner polygon strategy provides conservative results (valid for the assembly case only): even though the result is an approximation, the real probability value will not be underestimated, because this strategy overestimates the probability. Furthermore, the combination of the inner and outer polygon strategies provides a CI on the true failure probability value. The greater the number of linearizations, the smaller the CI.

Table 1 Probability of failure values for the assembly requirement obtained with the Monte Carlo simulation and the proposed procedure

The computing time increases with the number of linearizations. Moreover, it is not possible to know a priori which number of linearizations yields a converged probability of failure. To avoid running blind simulations, it is important to have a procedure able to provide accurate results as quickly as possible. The procedure proposed in Section 4 solves this issue: it is an adaptive algorithm which stops when the result is accurate enough, whatever number of linearizations is required to reach convergence.

This algorithm is applied to the Radiall test case, and the results are shown in the bottom part of Table 1. The strategy of linearization (inner or outer) does not influence the computing time of the Monte Carlo simulation, whereas the number of linearizations Nd strongly influences both the computing time and the probability accuracy. The probability values obtained by the proposed procedure are equal to those obtained with the Monte Carlo simulation, because the same sets of samples of the random variables were used. The procedure stopped with a number of linearizations equal to 36, thereby guaranteeing that this number is sufficient to yield a sufficiently accurate result, which a single Monte Carlo simulation cannot do. In addition, the computing time to reach an accurate result is reduced (8.45 h) compared with the time needed by the classical Monte Carlo simulation (two runs of 22.6 h) for the same accuracy: the proposed procedure computes both bounds in a single simulation, whereas the Monte Carlo simulation provides only one result at a time.

6 Conclusions

The functionality of a product is influenced by its design tolerances. Evaluating the quality level of a product at the design stage is therefore a key element: it enables the functional quality of the product to be improved while reducing the manufacturing cost. This requires methods, such as tolerance analysis, to quantify the impact of tolerances on the quality of the mechanism. To evaluate the quality level of the product, a mathematical model representing its behavior as faithfully as possible is required. However, the behavior model may be approximated, and this approximation yields an inaccurate quality level that must be managed.

Qureshi et al. (2012) provided a tolerance analysis formulation able to deal with non-linear behaviors. Although this mathematical formulation enables such problems to be solved, a difficulty appears when using an optimization scheme with non-linear constraints, making the result unreliable. The present paper is dedicated to defining linearization strategies for non-linear constraints to solve this problem. The linearization of non-linear constraints has a real impact on the computed probability of failure of the mechanism: the obtained result may underestimate the real value, hence overestimating the quality level. The linearization procedure must therefore be chosen carefully to obtain conservative results; indeed, the conservative strategy differs depending on the type of requirement (assembly or functional). In addition, an interesting procedure consists of bounding the true probability of failure with a confidence interval obtained from the two linearization strategies: outer and inner polygons.

The classical solution method, the Monte Carlo simulation, cannot indicate whether the chosen number of linearizations is sufficient to yield accurate probabilities of failure. An adaptive procedure is therefore proposed to obtain a sufficiently small confidence interval on the probability. Hence, whatever the studied mechanism, an accurate result can be obtained regardless of the number of linearizations required.