Application of Advanced Simulation Methods for the Tolerance Analysis of Mechanical Assemblies

Abstract. In the frame of a statistical tolerance analysis of complex assemblies, for example an aircraft wing, the capability to predict accurately and quickly specified, very small quantiles of the distribution of the assembly key characteristic becomes crucial. The problem is significantly magnified when the tolerance synthesis problem is considered, in which several tolerance analyses are performed and thus a reliability analysis problem is nested inside an optimisation one in a fully probabilistic approach. The need to reduce the computational time and accurately estimate the specified probabilities is critical. Therefore, a systematic study of several state-of-the-art simulation methods is performed herein, and the methods are critically evaluated with respect to their efficiency in dealing with tolerance analysis problems. It is demonstrated that tolerance analysis problems are characterised by high dimensionality, highly non-linear state functions, disconnected failure domains, implicit state functions and small probability estimations. Therefore, the successful implementation of reliability methods becomes a formidable task. Herein, advanced simulation methods are combined with in-house developed assembly models based on the Homogeneous Transformation Matrix method as well as with off-the-shelf Computer Aided Tolerance (CAT) tools. The main outcome of the work is that, by using an appropriate reliability method, the computational time can be reduced whilst the probability of defective products can be accurately predicted. Furthermore, connecting advanced mathematical toolboxes with off-the-shelf 3D tolerance tools in a process integration framework provides the means to successfully deal with the tolerance allocation problem in the future using dedicated and powerful computational tools.


Introduction
Low-volume, high-value mechanical assemblies, e.g. an aircraft wing, need a strict dimensional management procedure in place in order to control and manage the variation stemming from the various manufacturing processes used to fabricate the parts as well as to assemble them into the final product. This task is quite critical and should be treated at the early stage of the design process in order to ensure the functionality, high performance and low manufacturing cost of the designed product. The core of any dimensional management methodology is the ability to perform tolerance analysis and synthesis at the early design stage and thus to predict the variance or the entire distribution of specified assembly key characteristics (AKCs), as well as to optimise and allocate design tolerances for the assembly features of the various parts by minimising the manufacturing cost. For the latter case, several studies have been performed, e.g. [1], in which the optimisation problem was in most cases formulated using one objective function, the manufacturing cost, and constraint functions based on the worst-case error or on the root-sum-square variance of the AKC. In both formulations the AKC is denoted by Y and can be expressed as a function of the contributors d_i, i.e. the other dimensions on the parts of the assembly forming the tolerance chain, as Y = f(d_1, d_2, …, d_n). The function f corresponds to the assembly model.

Here ∂f/∂d_i is the sensitivity of the AKC to the contributor d_i, t_i is the tolerance, i.e. the range within which dimension d_i can fluctuate about its respective nominal value d̄_i, and σ_i² is the variance of contributor d_i used in statistical tolerancing. Usually, in statistical tolerancing, a contributor is expressed as a random variable whose mean value is the nominal dimension d̄_i and whose standard deviation is defined such that N standard deviations span the tolerance range, i.e. t_i = N σ_i. This N-sigma spread is also known as the natural tolerance range. Furthermore, considering geometrical tolerances, d_i can be associated with more than one random variable; for example, the positional tolerance of a hole is generally a function of two parameters, the magnitude and the angle of the positional vector, which gives the location of the varied centre of the feature with respect to (wrt) the feature frame in its nominal form. It is important to highlight that, in the case of Eq. (2), the estimation of the variance of the AKC assumes a linearization of the assembly function f, whilst the probability distribution of the AKC is not taken into account in the estimation.
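The linearized statistical stack-up described above can be sketched numerically. The following is a minimal illustration in Python (the paper's own models are in MATLAB); the sensitivities and tolerance values below are illustrative assumptions, not taken from the case studies:

```python
import numpy as np

# Linearized statistical stack-up (Eq. (2)): Var[Y] ~ sum_i (df/dd_i)^2 * sigma_i^2.
# Sensitivities and tolerances below are illustrative values, not from the paper.
sens = np.array([1.0, -1.0, 0.5])       # partial derivatives df/dd_i of the AKC
tol = np.array([0.2, 0.2, 0.1])         # tolerance ranges t_i
N_sigma = 6.0                           # natural tolerance range: t_i = N * sigma_i
sigma = tol / N_sigma                   # standard deviation of each contributor

var_rss = np.sum(sens**2 * sigma**2)    # root-sum-square variance of the AKC
t_wc = np.sum(np.abs(sens) * tol)       # worst-case stack-up, for comparison
print(var_rss, t_wc)
```

The worst-case value bounds the error for any combination of contributors inside their tolerance ranges, while the root-sum-square variance exploits the statistical cancellation between contributors.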
In an attempt to incorporate more statistical information about the AKC, as well as the actual form of the assembly model function, into the tolerance synthesis problem, the constraint functions of the optimisation problem can be expressed in terms of the probability of defective products, i.e. products that cannot meet the specification limits. Borrowing the terminology of the structural reliability analysis field, the tolerance synthesis problem can thus be formulated as a Reliability-Based Optimisation (RBO) problem [2], whose general form is

min C(t)  subject to  P[g_j(X, d(t), θ) ≤ 0] ≤ P_t,  j = 1, …, m,   (3)

where t is the vector of the design variables of the problem, either deterministic parameters or random variables. For the tolerance synthesis problem, t corresponds to the dimensional or geometrical tolerances on the features of the various parts, which are deterministic variables. C(t) is the objective function to be minimised and is mainly established from the manufacturing cost of the product, including cost-tolerance relationships for every applied tolerance. P_d is the probability of defective products, i.e. the probability of the event that the variation of the specified assembly key characteristic (AKC) does not conform to the specification limits (SLs). g_j(X, d(t), θ) are the limit state functions, i.e. the relationships between the AKC and the SLs. It is recalled that the AKC is a function of the random variables d_1, d_2, …, d_n and of the design parameters t, i.e. the applied tolerances. The limit state functions can be explicit mathematical expressions or can be given implicitly, e.g. through a Computer Aided Tolerance (CAT) tool. P_t is the target probability of defective products and is usually derived from the yield, i.e. the probability of the complementary event, which generally equals 99.7% following the six-sigma quality approach. θ is the uncertainty to which some of the model parameters may be subjected.
It is clear from the formulation of Eq. (3) that tolerance synthesis is an optimisation problem with a reliability analysis problem nested inside it, in which P_d must be estimated for every specified AKC in every iteration of the optimisation algorithm. Currently, state-of-the-art commercial CAT tools, e.g. 3DCS [3], use Crude Monte Carlo (CMC) simulation to perform the tolerance analysis. It is well known that crude Monte Carlo can become very time consuming when the estimation of small probabilities is involved. This probably explains the fact that tolerance synthesis is still treated in commercial CAT tools using linearised assembly models and linear optimisation techniques, e.g. in [3]. The tolerance allocation problem is computationally very demanding, and both appropriate optimisation and reliability methods should be used to successfully deal with complex assemblies.
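The nested structure of the RBO formulation can be illustrated with a toy problem. The sketch below (Python; the assembly function, cost model, specification limit and target probability are all illustrative assumptions, not the paper's) maximises the tolerances, and hence minimises a cost-tolerance model, subject to the probabilistic constraint P_d ≤ P_t. For smoothness, the defect probability of the toy linear assembly is evaluated analytically rather than by simulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy tolerance synthesis as an RBO problem (cf. Eq. (3)); values are illustrative.
USL, P_t = 0.15, 3e-3                     # symmetric SLs at +/-USL, target probability

def p_defective(t):
    # Toy assembly function Y = d1 + d2 with d_i ~ N(0, (t_i/6)^2),
    # so Y ~ N(0, ||t||^2 / 36); defect probability from the normal CDF.
    sd = np.linalg.norm(t) / 6.0
    return 2 * norm.cdf(-USL / sd)

cost = lambda t: np.sum(1.0 / t)          # wider tolerances are cheaper to make
res = minimize(cost, x0=[0.10, 0.10], method="SLSQP",
               bounds=[(0.02, 0.5)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda t: P_t - p_defective(t)}])
print(res.x, p_defective(res.x))
```

In a realistic setting p_defective would be an expensive simulation through the assembly model, which is exactly why fast, accurate small-probability estimators matter for the nested loop.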
Thus, the focus of this work is on tolerance analysis and on the fast and accurate estimation of P_d using advanced simulation methods. A lot of work has been performed on the development of advanced reliability techniques [4], mainly, however, in the field of structural reliability analysis rather than in the tolerance analysis field. Reliability methods can be categorised as approximate or gradient-based methods, simulation techniques, and metamodelling-based methods. It is interesting to note that few of these methods have been implemented in the tolerance analysis field, e.g. in [5] or more recently [6], whilst a systematic study is needed for their successful implementation in the field.
Therefore, in this work, state-of-the-art reliability methods are explored in order to identify either the most suitable one or the gaps for further development of probabilistic methods applied to this type of problem. Three simulation methods, namely the Latin Hypercube sampling technique (LH) [7], Quasi Monte Carlo simulation using Sobol' sequences (QMCS) [8] and the Subset Simulation method (SS) [9], are implemented to solve a tolerance analysis problem. The three methods are evaluated in terms of speed and accuracy against crude Monte Carlo predictions. Initially, a case study is presented for an assembly consisting of two parts, adopted from [10]. Although the example is elementary, it is adequate to highlight all the difficulties introduced in the estimation of the probability of defective products, P_d. The rationale behind the selection of LH, QMCS and SS is explained by thoroughly examining the characteristics of the limit state function of the problem. A second case study is then analysed, in which the assembly model is built using the commercial CAT software 3DCS Variation Analyst. The ability to speed up the probability estimation of existing CAT tools by using off-the-shelf statistical tools collaboratively makes it possible to exploit the capabilities of each software package to the maximum. That is, very complex assemblies can be analysed using professional CAT tools while taking advantage of advanced mathematical toolboxes to implement the reliability analyses. This is the first step towards establishing a process integration and design optimisation framework in order to deal with the more complex problem, the tolerance allocation one.

Case study
Two examples were considered in this work to study and prove the efficiency of the selected reliability methods for tolerance analysis problems. More specifically, the first assembly was adopted from [10] and concerns a simple arrangement of two parts on a fixture. The assembly sequence and indexing plan have been thoroughly presented in [10]. The AKC of interest is depicted in Fig. 1 and comprises the distance of point M1 on part A from surface F2 on part B. Variation was introduced by assuming positional tolerances for every assembly feature on the two parts and the fixture. In total, eight tolerances were considered. The tolerance values were taken to be the same for all the features and are presented in Table 1. For the second case study, in which a 3DCS model was built, a simple product of two parts was studied again. The two parts are presented in Fig. 2. The AKC is defined by the distance of point M1 on part A to surface F2 on part B. This example is quite similar to the first one. It is presented, however, in order to test the successful implementation and collaboration of advanced reliability methods with commercial CAT tools defined by implicit limit state functions. Positional tolerances were assumed for all the holes, with the value given in Table 1.

Fig. 2. Case study 2: two parts forming the assembly in exploded view, along with the defined AKC

Assembly models
Assembly models were developed for both case studies. For the first example, the models were based on the matrix transformation method [11]; thus, mathematical expressions were formulated and programmed in MATLAB [12]. Briefly, an assembly can be described as a chain of frames [11] among the assembly features of the various parts using homogeneous transformation matrices (HTMs). An HTM is defined by

T = [ R  p ; 0  1 ],   (4)

where R is the 3×3 rotation matrix and p the 3×1 translation vector. Variation is introduced using the Differential Transformation Matrix (DTM) [11] by multiplying the homogeneous transform T_j by D_j = I + δ_j, where δ_j is given by

δ_j = [ 0  −δγ  δβ  δx ; δγ  0  −δα  δy ; −δβ  δα  0  δz ; 0  0  0  0 ],

with δα, δβ and δγ small rotations and δx, δy and δz small translations wrt frame j, representing variation from the nominal form. It can be shown that the AKC specified in Fig. 1 is a function of two homogeneous transforms, Eqs. (7) and (8), obtained by chaining the following matrices: T_GS, the homogeneous transform from the reference fixture frame S to the global frame G; T_SF, from the auxiliary frame F to the reference fixture frame S; T_F1′, from the compound frame 1′ to the auxiliary frame F; T_1′A′, from the compound frame on part A, A′, to the compound frame on the fixture, 1′; T_A′A, from the compound frame A′ to the auxiliary frame A; T_AO, from the auxiliary frame A to the reference frame O of part A; and finally D_1′ and D_A′, the DTMs that account for variation in the position of the features on the fixture and on part A, respectively. For Eq. (8), the transformation matrices are similar to those presented for part A, interchanging frames 2′, B′, B and K with 1′, A′, A and O, respectively. An in-depth discussion of the derivation of the assembly models can be found in [10]. It is important to note that the AKC is a function of the imposed positional tolerances expressed through the definition of the DTMs. The interpretation of geometric tolerances into the appropriate DTM matrix format was performed according to [13].
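The HTM and DTM operations described above can be sketched numerically. The following minimal illustration is in Python rather than the paper's MATLAB, and the transform and deviation values are illustrative, not taken from the case study:

```python
import numpy as np

def htm(R, p):
    # Homogeneous transformation matrix from a 3x3 rotation R and 3x1 translation p.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def dtm(da, db, dg, dx, dy, dz):
    # First-order differential transformation matrix: identity plus the
    # skew-symmetric small-rotation block and the small translation vector.
    D = np.eye(4)
    D[:3, :3] += np.array([[0, -dg, db], [dg, 0, -da], [-db, da, 0]])
    D[:3, 3] = [dx, dy, dz]
    return D

# Nominal transform perturbed by a small positional deviation of a feature frame:
T_nom = htm(np.eye(3), [10.0, 0.0, 0.0])
T_var = T_nom @ dtm(0, 0, 0.001, 0.05, 0.02, 0)   # illustrative deviation values
point = T_var @ np.array([5.0, 0.0, 0.0, 1.0])    # locate a point through the chain
print(point[:3])
```

Chaining several such matrices, with DTMs inserted at the varied features, reproduces the structure of the assembly function Y = f(d_1, …, d_n).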
For the second example, 3DCS Variation Analyst was used to build the assembly model implicitly, by defining appropriate features, moves, tolerances and measures.

Probability of defected products and limit state function
An example of the probability of defective products, that is, products that do not conform to the specification limits, can be observed in Fig. 3. It corresponds to the area below the red curves at the tails of the histogram plot and is defined by

P_d = P(USL − Y < 0) + P(Y − LSL < 0),   (9)

where USL is the Upper Specification Limit and LSL is the Lower Specification Limit, usually determined by customer requirements. Generally, the requirement for the assembly process is that six standard deviations of the AKC should fall within the specification range, as depicted in Fig. 3. Assuming a normal distribution for the AKC, each probability on the right-hand side of Eq. (9) should be equal to or less than 1.5E-03, a quite small probability value. The limit state functions g_1 and g_2 for the two events on the right-hand side of Eq. (9) are given by the expressions inside the parentheses of the two probabilities. The limit state function is defined using the same convention as in the structural reliability field: when the limit state function is negative, the system fails. In the tolerance analysis field, a negative limit state function means that a defective product has been produced. It is recalled that the AKC, Y, is a function of the contributors d_i or, in reliability terms, the random variables.
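Given a sample of AKC values, the defect probability of Eq. (9) is simply the fraction of the sample outside the specification limits. A minimal sketch in Python, using an illustrative standard-normal AKC sample rather than the paper's assembly model:

```python
import numpy as np

# P_d = P(USL - Y < 0) + P(Y - LSL < 0): mass of the AKC outside the SLs.
# Limit states g1 = USL - Y and g2 = Y - LSL; negative means a defective product.
rng = np.random.default_rng(1)
Y = rng.normal(0.0, 1.0, 10**6)          # illustrative AKC sample, sigma = 1
USL, LSL = 3.0, -3.0                     # six-sigma specification range

g1, g2 = USL - Y, Y - LSL
p_d = np.mean(g1 < 0) + np.mean(g2 < 0)
print(p_d)
```

For a normal AKC with the six-sigma requirement, the result should sit near 2·Φ(−3) ≈ 2.7E-03, i.e. close to the 1.5E-03 per-tail budget cited above.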

Fig. 3. AKC distribution in relation to the upper and lower specification limits
To further explore the nature of the tolerance analysis problem and to visualise the limit state function of Eq. (9), the first case study presented in Fig. 1 is analysed considering variation only for the pilot hole on part A. This assumption reduces the reliability problem to one with two random variables and thus makes it possible to visualise the limit state function. The two random variables of the problem are the magnitude (ρ) and the angle (θ) of the positional vector that gives the location of the varied centre of the pilot hole on part A wrt the feature frame in the nominal form of part A, as depicted in Fig. 4.
A Rayleigh distribution is usually assumed for the magnitude, ρ, and a Uniform one for the angle, θ. The parameter of the Rayleigh distribution is defined such that three standard deviations equal half of the tolerance range given in Table 1. The parameters of the Uniform distribution are set to 0 and 360 degrees, respectively. The magnitude and the angle of the positional vector are transformed into the appropriate DTM format as presented in [10]. The AKC is evaluated based on the HTMs of Eqs. (7)-(8). The 3D graph and the contour plot of the limit state function g_1 in the physical space are presented in Fig. 5. For clearer visualisation, the USL was set to a higher value so as to result in a larger probability of defective parts, i.e. P(USL − Y < 0) greater than 1.5E-03. Additionally, only two contour lines are plotted in Fig. 5, whilst the axis limits were modified appropriately. It is interesting to note that the limit state function is non-linear and non-convex even for this simple example, which involves just one geometrical tolerance. This is due to the sine and cosine functions of the angle component describing the positional tolerance in Fig. 4. Furthermore, the failure domain, i.e. the design space where the limit state function becomes negative, consists of several disconnected areas, as can be seen in Fig. 5. To summarise, statistical tolerance analysis problems are distinguished by the estimation of small probability values; a moderate to high number of random variables when complex assemblies with tens of tolerances in the tolerance chain of the AKC are considered; highly non-linear limit state functions; and disconnected failure domains.
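The sampling of a positional tolerance as described above can be sketched as follows (Python; the tolerance value is illustrative). The Rayleigh scale is set so that three standard deviations of ρ equal half the tolerance range, and the polar pair (ρ, θ) is converted to the Cartesian deviation of the hole centre:

```python
import numpy as np

# Positional tolerance of a hole: magnitude rho ~ Rayleigh, angle theta ~ U(0, 2*pi).
tol = 0.2                                            # illustrative tolerance value
b = (tol / 2.0) / (3.0 * np.sqrt((4 - np.pi) / 2))   # Rayleigh scale: 3*std = tol/2

rng = np.random.default_rng(2)
n = 10**5
rho = rng.rayleigh(scale=b, size=n)
theta = rng.uniform(0.0, 2 * np.pi, size=n)
dx, dy = rho * np.cos(theta), rho * np.sin(theta)    # deviation of the hole centre
print(3 * rho.std(), tol / 2)
```

The deviations dx and dy would then be placed in the DTM translation entries; the trigonometric mapping from θ is precisely what makes the limit state non-linear and non-convex.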

Advanced simulation methods
In order to select an appropriate reliability method and successfully implement it to solve the tolerance analysis problem, all the aspects discussed in Section 2.3 should be considered and addressed to some extent. All the above-mentioned observations make this task quite difficult. The estimation of the probabilities in Eq. (9) is equivalent to the computation of specific integrals. That is, generalising the problem, the probability of the event that the limit state function g(x) is negative can be obtained as

P(g(X) ≤ 0) = ∫_F f_X(x) dx = ∫ I_{g(x)≤0}(x) f_X(x) dx,   (10)

where X is the vector of random variables (i.e., herein, the contributors d_i) with joint probability density function f_X, I_{g(x)≤0}(x) is the indicator function, with I_{g(x)≤0}(x) = 1 if g(x) ≤ 0 and I_{g(x)≤0}(x) = 0 otherwise, and F = {x : g(x) ≤ 0} is the failure domain.
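Writing Eq. (10) as the expectation of the indicator function is what every simulation method in this section exploits: sample from f_X and average the indicator. A minimal sketch (Python; the limit state below is a toy function chosen to have two disconnected failure regions, not the paper's AKC model):

```python
import numpy as np

# Eq. (10) as an expectation: P(g(X) <= 0) = E[ I_{g<=0}(X) ].
# Toy limit state with two disconnected failure tails (illustrative only):
g = lambda x: 2.5 + 0.5 * np.sin(4 * x[:, 1]) - np.abs(x[:, 0])

rng = np.random.default_rng(3)
X = rng.standard_normal((10**6, 2))      # samples from the joint density f_X
p = np.mean(g(X) <= 0)                   # indicator-function estimate of the integral
print(p)
```

Because the indicator is averaged regardless of the shape of F, disconnected failure domains pose no formal difficulty for sampling methods, only an efficiency one.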
Due to the disconnected failure domains of the tolerance analysis problem depicted in Fig. 5, typical gradient-based reliability methods, e.g. the First- or Second-Order Reliability Methods (FORM or SORM) [4], were not considered in this analysis because their typical formulation is inappropriate for this type of problem. Nevertheless, further development of such methods should be explored in the future because of their fast probability estimation. Thus, herein, and as a first step, only advanced simulation techniques were assessed. Metamodelling-based methods were not considered further. Additionally, Importance Sampling methods [4], although very efficient and suitable for accelerating probability estimations by sampling the design space in the region that contributes the most to the probability of interest, i.e. the probability of defective parts, were not considered herein. This is because most of these methods rely on a search algorithm based on FORM and thus, because of the nature of the failure domain, the implementation of Importance Sampling would be inappropriate in its current form. Further investigation is needed for this type of simulation technique as well.

Crude Monte Carlo
The method used as a benchmark in this work is CMC. CMC is based on random sampling of the vector of random variables X. Samples of size N are formed for every random variable and repetitive simulations are performed through the developed assembly models, yielding a sample of output values for each limit state function. Thus, the probabilities of Eq. (9) can be estimated by

P̂ = N_f / N,   (11)

where N_f is the number of times that the limit state function becomes negative. A major step in implementing the CMC method is the random number generation. CMC analysis was implemented herein using UQLab [14], an advanced general-purpose uncertainty quantification tool developed at ETH Zurich. The tool is based on MATLAB functions and thus the respective generators were used. Although CMC is quite simple to implement and can handle complex, implicit and highly non-linear limit state functions, the coefficient of variation, CoV[•], of the probability estimator P̂ in Eq. (11), i.e. the ratio of the standard deviation to the mean value of the estimator, depends on the probability being estimated and on the sample size N. For a decent probability estimation, i.e. CoV[•] ≈ 10%, as a rule of thumb N ≈ 100/P samples are needed. This indicates a very large number of iterations to accurately estimate small probability values. Additionally, random sampling usually generates clusters and gaps, as depicted in Fig. 6, which indicates that the random variable space is not searched efficiently. For disconnected failure domains such as those depicted in Fig. 5, this introduces the need for even more iterations to cover the entire random variable space efficiently. To alleviate the computational burden associated with Monte Carlo simulation, variance reduction techniques have been proposed [4]. Herein, a stratified sampling technique was explored, namely Latin Hypercube simulation, as well as a Quasi Monte Carlo simulation method based on Sobol' sequences. Additionally, an adaptive Monte Carlo technique, namely the Subset Simulation method, was also investigated.
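The N ≈ 100/P rule of thumb follows directly from the coefficient of variation of the CMC estimator, CoV = sqrt((1 − P)/(N·P)). A one-line check for the per-tail target probability of Eq. (9):

```python
# CMC sample size for a target coefficient of variation:
# CoV = sqrt((1 - p) / (N * p))  =>  N = (1 - p) / (p * CoV^2) ~ 100 / p for small p.
p = 1.5e-3                                # per-tail target probability of Eq. (9)
cov_target = 0.10
N_needed = (1 - p) / (p * cov_target**2)
print(N_needed)                           # tens of thousands of model evaluations
```

For the six-sigma per-tail probability this is roughly 6.7E+04 evaluations per limit state, which motivates the variance reduction techniques of the following subsections.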

Latin hypercube simulation method
The basic idea behind LH simulation is to sample the random variables more efficiently by avoiding the clusters and gaps generated in random sampling, as depicted in Fig. 6. To achieve this, the range of each variable is divided into N non-overlapping intervals of equal probability. One value from each interval is selected at random, with respect to the probability density within the interval. The N values obtained for the first random variable are paired in a random manner with the N values obtained for the second random variable, and so on, until an N × n sample matrix is formed, where n is the number of random variables. It is important to mention that even though the random variables are sampled independently and paired randomly, the final samples can be correlated. In order to obtain samples with correlation coefficients matching the intended ones, restrictions in the pairing method are usually employed. Finally, the efficiency of LH simulation can be improved by iterating and optimising the LH design according to some criterion, e.g. maximising the minimum distance between any two points. Given the final samples for the n random variables, the probability that the limit state function is less than zero can be estimated by Eq. (11). UQLab was used to implement LH simulation and thus MATLAB algorithms were used.
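The stratification property described above can be sketched in a few lines (Python's scipy.stats.qmc here, standing in for the paper's UQLab/MATLAB implementation; the sample size and marginals are illustrative):

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin Hypercube sample: each variable is stratified into N equal-probability
# intervals with exactly one point per interval, and the columns are paired.
N, n_vars = 1000, 2
sampler = qmc.LatinHypercube(d=n_vars, seed=4)
U = sampler.random(N)                     # N points in the unit hypercube
X = norm.ppf(U)                           # map to standard normal marginals

# Verify the stratification: one point per probability stratum on each marginal.
counts = np.histogram(U[:, 0], bins=np.linspace(0, 1, N + 1))[0]
print(counts.min(), counts.max())
```

The isoprobabilistic transform norm.ppf turns the stratified uniform sample into a sample with the prescribed marginals, after which Eq. (11) applies unchanged.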

Quasi Monte Carlo simulation based on Sobol' sequence
Sobol' sequences belong to the family of low-discrepancy sequences [8]. Discrepancy is a measure characterising the lumpiness of a sequence of points in a multidimensional space. Samples made from a finite subset of such sequences are called quasi-random samples and are as uniform as possible in the random variable space, as depicted in Fig. 6. Thus, the random variable space is explored more efficiently, a good characteristic for dealing with multiple failure domains. Additionally, the estimated probability in Eq. (10) is expected to converge faster than the respective probability based on random sampling [8]. Quasi-random samples can be analysed like any other empirical data set and thus Eq. (11) can be used to determine the probability of interest. Herein, the sampling based on Sobol' sequences, as well as the randomisation of the sample, i.e. scrambling of the sequence, were performed using the appropriate MATLAB functions. The transformation of the sample points from the unit hypercube into a sample with the prescribed marginal distributions and correlation, as well as the Monte Carlo simulation itself, were set up using UQLab.
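A scrambled Sobol' sample and its discrepancy can be generated as follows (Python's scipy.stats.qmc here, standing in for the paper's MATLAB scrambling functions; the dimension and sample size are illustrative):

```python
from scipy.stats import qmc, norm

# Scrambled Sobol' sequence: a low-discrepancy point set that covers the unit
# hypercube far more evenly than pseudo-random sampling (quasi-random samples).
sampler = qmc.Sobol(d=2, scramble=True, seed=5)
U = sampler.random_base2(m=10)            # 2^10 = 1024 points; powers of 2 keep balance
X = norm.ppf(U)                           # transform to prescribed marginals
print(qmc.discrepancy(U))                 # lower than an i.i.d. sample of this size
```

Scrambling randomises the sequence so that repeated runs yield independent probability estimates while preserving the low-discrepancy structure.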

Subset Simulation method
The basic idea behind the Subset Simulation method is that the estimation of a rare event probability, e.g. the small probabilities involved in Eq. (9), can be performed by means of more frequent intermediate conditional failure events F_i, such that F_1 ⊃ F_2 ⊃ ⋯ ⊃ F_m = F. Thus, the probability of interest can be estimated as a product of conditional probabilities,

P(F) = ∏_{i=1}^{m} P(F_i | F_{i−1}),   (12)

where F_i = {g(X) ≤ b_i} are the intermediate conditional failure events, b_i are decreasing threshold values of the limit state function whose values need to be specified, F_0 is the certain event and m is the number of subsets. The threshold values b_i can be chosen so that the estimates of the conditional probabilities P(F_i | F_{i−1}) correspond to a sufficiently large value, p_0 ≈ 0.1. Therefore, with an appropriate choice of the intermediate thresholds, Eq. (12) can be evaluated as a series of reliability analysis problems with relatively high probabilities to be estimated. The trade-off is between minimising the number of subsets, m, by choosing relatively small intermediate conditional probabilities, and maximising the intermediate conditional probabilities so that they can be estimated accurately without much computational burden. The probability P(F_1 | F_0) is estimated using CMC, whilst the conditional probabilities P(F_i | F_{i−1}) are typically estimated using Markov Chain Monte Carlo based on Metropolis-Hastings algorithms [15][16]. UQLab provides built-in functions implementing the Subset Simulation method.
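The scheme can be condensed into a short sketch: thresholds are set adaptively as the p_0-quantile of the current limit-state sample, and each new level is populated by Metropolis-Hastings chains started from the seeds that survived the previous level. This is a minimal illustration in standard normal space (Python; it is not UQLab's implementation, and the limit state at the end is a toy function with a known tail probability):

```python
import numpy as np

def subset_simulation(g, dim, N=1000, p0=0.1, seed=6):
    """Estimate P(g(X) <= 0) for X ~ N(0, I) with adaptive intermediate levels."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, dim))
    G = g(X)
    p = 1.0
    for _ in range(20):                          # cap on the number of subsets m
        thr = np.quantile(G, p0)                 # intermediate threshold b_i
        if thr <= 0:                             # reached the failure event F
            return p * np.mean(G <= 0)
        p *= p0                                  # accumulate P(F_i | F_{i-1}) = p0
        keep = G <= thr                          # seeds conditional on F_i
        x, gx = X[keep].copy(), G[keep].copy()
        Xs, Gs = [x.copy()], [gx.copy()]
        for _ in range(int(np.ceil(N / len(x))) - 1):
            # Metropolis-Hastings step targeting N(0, I) restricted to F_i:
            cand = x + rng.standard_normal(x.shape)
            log_r = -0.5 * ((cand**2).sum(1) - (x**2).sum(1))
            gc = g(cand)
            move = (rng.random(len(x)) < np.exp(np.minimum(log_r, 0.0))) & (gc <= thr)
            x = np.where(move[:, None], cand, x)
            gx = np.where(move, gc, gx)
            Xs.append(x.copy()); Gs.append(gx.copy())
        X = np.vstack(Xs)[:N]
        G = np.concatenate(Gs)[:N]
    return p * np.mean(G <= 0)

# Toy limit state: failure when the first coordinate exceeds 3.5, P ~ 2.3E-04:
p_hat = subset_simulation(lambda x: 3.5 - x[:, 0], dim=2)
print(p_hat)
```

With p_0 = 0.1 and N = 1000 per level, a probability of order 1E-04 is reached in about four levels, i.e. a few thousand model evaluations instead of the roughly 1E+06 a CMC estimate of comparable accuracy would need.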
It is important to highlight that merging advanced statistical tools, e.g. the statistical toolbox of MATLAB or UQLab, with off-the-shelf CAT software provides the capability to analyse complex assemblies efficiently by applying advanced statistical methods to the problem at hand. Therefore, appropriate user-defined interfaces were established herein, linking UQLab and 3DCS Variation Analyst. By taking advantage of the user-defined samples option in 3DCS, vectorised computation was made possible when linking UQLab and 3DCS for any type of simulation method. As a result, running CMC through UQLab by calling 3DCS externally in batch mode, or using the 3DCS software directly, resulted in approximately the same computational time. This efficient process integration framework between advanced statistical tools and CAT software provides the opportunity to move one step forward, i.e. towards the implementation of tolerance synthesis in terms of RBO.

Results and Discussion
To study the efficiency of the proposed reliability methods in tolerance analysis problems, the two case studies were analysed. Results for the first case study are presented in Fig. 7, in which the probability P_1 corresponds to the first probability on the right-hand side of Eq. (9), with limit state function g_1. The graph in Fig. 7a depicts the CoV[•] of the probability estimator against the number of model evaluations for the selected reliability methods. The graph in Fig. 7b depicts the normalised mean value against the number of model evaluations. The normalisation was performed with respect to the expectation of the probability estimator evaluated by CMC for 1E+06 iterations. Similar results for the second case study are depicted in Fig. 8. The mean value, E[•], and the coefficient of variation, CoV[•], of the probability estimator for each method were derived by an empirical approach. That is, hundreds of reliability analyses were performed, generating samples of the probability values of Eq. (9), which were statistically analysed to obtain the mean value and the coefficient of variation of each probability estimator. It is apparent from Fig. 7 and Fig. 8 that the advanced simulation methods perform better than CMC. More specifically, SS has the best performance in terms of computational effort and accuracy of the prediction. The predictions in Fig. 7a framed by the black box correspond to CoV[P_1] ≈ 25%, a fair variation of the probability estimator. To achieve this variation, approximately 2,700 model evaluations are required with the SS method, 5,500 simulations with LH and QMCS, and almost 10,000 evaluations with CMC.
Fig. 8 proves that advanced statistical tools can be linked successfully with professional CAT tools and, further, can accelerate the probability estimations. Similar observations on the efficiency of the reliability methods made for the first example can be stated for the second one, since both problems are analogous. Fig. 7 and Fig. 8 reveal that the QMCS and LH methods produce quite similar results.
Finally, it should be stated that the CMC, LH and QMCS methods have an advantage over the SS method with respect to their implementation. More specifically, for this first group of reliability methods, the sample generation procedure and the calculation of the AKC values need to be performed only once, after which both probabilities P_1 and P_2 of Eq. (9), as well as any other percentile of the distribution of the AKC, can be estimated very quickly. This is not the case for the SS method in its current form, in which the assembly model must be re-evaluated for any new percentile to be estimated. That is, for Eq. (9), two different analyses must be performed to estimate its two probabilities. Nevertheless, SS remains the best option in any case.

Conclusions
One major outcome of this work is that the statistical tolerance analysis of mechanical assemblies can introduce multiple failure domains in the design space, i.e. separate groups of values of the contributors that result in defective products, as presented in Fig. 5. This is due to the non-linearity introduced by geometric tolerances, such as positional tolerances of cylindrical features of size. In a fully probabilistic treatment of the tolerance allocation problem, this imposes serious issues for the selection of the most appropriate uncertainty quantification method. Therefore, for the first time, comprehensive guidance was provided and three state-of-the-art simulation methods were critically evaluated with respect to their applicability to tolerance analysis problems. From the analysis, the best option, with a good compromise between performance and computational burden, turned out to be the Subset Simulation method. It was shown that, for the same variability in the estimated probability of defective products, Subset Simulation performs about 4 times faster than crude Monte Carlo. The Quasi Monte Carlo method based on Sobol' sequences showed good efficiency, being approximately 2 times faster than crude Monte Carlo, followed by the Latin Hypercube simulation technique.
Furthermore, the work demonstrated the successful connection of advanced statistical tools such as UQLab with off-the-shelf CAT tools. This link makes it possible to adopt advanced uncertainty quantification methods in real, complex tolerance analysis problems and to accelerate the probability estimations, rendering the reliability-based optimisation approach to the tolerance allocation problem feasible.
This work is part of ongoing research in which advanced reliability methods need to be studied further, in combination with appropriate optimisation algorithms, to identify successful strategies for attacking the reliability-based optimisation problem.

Fig. 1 .
Fig. 1. Case study 1: two parts on a fixture along with the established frames and the AKC

Fig. 4 .
Fig. 4. Nominal and varied form of pilot hole assuming circular tolerance zone

Fig. 5 .
Fig. 5. 3D graph and contour plot of the limit state function g_1


Fig. 7 .
Fig. 7. Descriptive statistics for the probability estimator, P(g_1 ≤ 0), against the number of model evaluations for the first case study: (a) coefficient of variation (b) normalised mean value

Fig. 8 .
Fig. 8. Descriptive statistics for the probability estimator, P(g_1 ≤ 0), against the number of model evaluations for the second case study: (a) coefficient of variation (b) normalised mean value

Table 1 .
Tolerance specifications for the parts/fixture, first case study