Research in Engineering Design, Volume 18, Issue 3, pp 129–148

Development and characterisation of error functions in design

Original Paper

Abstract

As simulation is increasingly used in product development, there is a need to better characterise the errors inherent in simulation techniques by comparing such techniques with evidence from experiment, test and in-service experience. This is necessary to allow judgement of the adequacy of simulations in place of physical tests and to identify situations where further effort in data collection and experimentation needs to be expended. This paper discusses a framework for uncertainty characterisation based on the management of design knowledge, leading to the development and characterisation of error functions. A classification is devised in the framework to identify the most appropriate method for the representation of error, including probability theory, interval analysis and Fuzzy set theory. The development is demonstrated with two case studies to justify the rationale of the framework. Such formal knowledge management of design simulation processes can facilitate utilisation of accumulated design knowledge as companies migrate from testing to simulation-based design.

Keywords

Uncertainty characterisation · Error functions · Simulation-based design · Knowledge management · Variant design

Abbreviations

CAE: Computer-aided engineering
CFD: Computational fluid dynamics
IPD: Integrated product development
FEA: Finite element analysis
FS: Factor of safety
PDF: Probability density function
POF: Probability of failure

1 Introduction

Engineering design is a complex decision-making process and, often, decisions have to be based on incomplete and uncertain information. Usually, design analysis is conducted to aid this decision-making process using some form of mathematical model (Shigley and Mischke 1989) for fast and economic evaluation of design performance compared to prototype testing. Typically, design analysis is performed as rough calculations in early design iterations, and as the design proceeds and becomes more complete, complex computational models may be used. Design analysis typically involves key stages including failure mode identification, mathematical modelling, computer coding, representation of design variables and output interpretation. The performance parameters that emerge are used to identify how well design requirements such as fatigue strength or reliability are satisfied and to inform the identification of design changes such as modifications to dimensions, materials or manufacturing processes to overcome limitations. Owing to pressure to reduce product development cost and time, computer simulations have become essential in the development of modern engineering products. However, many large-scale developments of engineering products still rely heavily on physical testing. For instance, an estimated 40% of the cost of product validation in the automotive industry is due to the manufacture of prototypes (Honeywell 2001).

Integrated Product Development (IPD) emphasises the integration of Computer Aided Engineering (CAE) and analytical tools in the process of analysis and evaluation leading to the modification and refinement of the product models (Ohsuga 1989). The migration of product evaluation from physical to virtual testing has increased the possibility for errors due to the presence of uncertainties in simulation processes. Uncertainty prevails in many aspects of design analysis—it may arise from the representations of the data, model or process or from variations in natural phenomena. It may also relate to human factors (e.g. error in procedures, choice of alternatives). Model and parameter uncertainties often contribute to discrepancies between observed and predicted results due to simplification and approximation, incomplete knowledge, lack of data, etc. Even if the modelling procedures are carried out with the best available knowledge and using the most advanced computer tools, modelling predictions may still be in error compared to physical observations. Poor understanding of uncertainty, and the resulting limitations in analysis, significantly reduces the ability to propose more radical designs. For simulation-based design to be successful, there is a need for more formal organisation of knowledge to aid decision-making for technical management of the IPD process. This is a crucial requirement if design is to move away from a dependence on physical prototype testing and expensive development programmes. Appropriate knowledge organisation and management will allow a judgement to be made on the confidence that can be placed in a simulation process and will indicate where data collation, experimental work and research and development are needed to allow a simulation-based design environment to operate.

This paper discusses a framework for uncertainty characterisation based on the organisation of the extent and nature of knowledge regarding the design, leading to the development and characterisation of error functions for the representation of disparity between simulation results and experimental and in-service observations. The framework was developed based on characteristics deduced from twenty literature cases from different design domains, and these involve varying degrees of uncertainty in the simulation data and varying quantities of evidence from test or service performance. These cases illustrate the rationale for the development of suitable error functions for each of the categories in the classification they represent. Two case studies conducted as part of the research on this subject are also used to substantiate the framework proposed and are elaborated with regard to the framework and error functions. The error characterisation is to be incorporated into design simulation and analysis, particularly in the modelling of variant design applications, in order to facilitate informed decisions with improved confidence and reduced risks. It is envisaged that many companies could potentially benefit from better utilisation of knowledge and observations from in-service experience, and in particular past failures, as they migrate from testing to simulation-based design.

2 Methods for uncertainty characterisation in engineering design

Approaches to engineering design under uncertainty have traditionally been deterministic, based on the use of factors of safety (FS) to accommodate unknowns. Factors of safety are applied in the design of structures to allow for uncertainty in loading, the statistical variation of material strengths, inaccuracies in geometry and theory and the grave consequences of some failures. Deterministic approaches fail to effectively quantify uncertainty for reliability and safety as the selection of parameter values greatly depends upon the engineer's knowledge and experience, resulting in inconsistent or sub-optimal designs. The complexity of modern products and their modelling in CAE systems necessitate a more complete understanding of uncertainty in order to achieve improved accuracy and reliability in the end results. As a precursor to further discussion, some useful methods for uncertainty characterisation in engineering design are briefly reviewed.

2.1 Probability theory

The advantage of probability theory in uncertainty characterisation for safety and reliability analysis is evident due to the inherent variability in loading and strength (Cornell 1969; Pugsley 1966). Probability theory makes use of the same models as deterministic design, but accounts for variability in the design parameters by describing these as random variables, determined ideally from testing statistically large sample sizes of the characteristics and properties of interest. Computational and statistical methods are then used to investigate the combination and interaction of these design parameters in the performance function or failure models. By far the most appropriate means for characterising uncertainty of the variability type is through Probability Density Functions (PDF) (Moens and Vandepitte 2004), provided that sufficient information on the frequency of occurrence is available to characterise the PDF.

An important application of probability theory is in the prediction of reliability or probability of failure (POF). For example in reliability analysis, the load and strength can be represented by PDFs and the POF can be evaluated by the integral of the joint probability function, p(x) of the random variables in the system over the entire failure region as
$$ \text{POF} = \int \cdots \int_{\text{failure region}} p(x)\,\mathrm{d}x $$
(1)
The algebra of random variables can be applied for arithmetic operations concerning normally distributed variables (Haugen 1980). The variance for a function g(xi) of a combination of more than two statistically independent variables, xi can be analytically approximated by the variance equation, where σ is the variable standard deviation and μ is its mean as
$$ \sigma_{g}^{2} \approx \sum_{i} \left( \frac{\partial g}{\partial x_{i}} \right)^{2} \sigma_{x_{i}}^{2} $$
(2)
Many advanced computational techniques based on the probability theory are available (Riha et al. 2002), for example, Advanced Mean Value (Wu and Wirsching 1987), Fast Probability Integration (Wu and Wirsching 1987), Monte Carlo and Latin Hypercube simulations (McKay et al. 1979). These techniques have been successfully applied in the design of aircraft and marine structures (Ayyub and de Souza 2000; Mavris and deLaurentis 2000).
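To make these probabilistic concepts concrete, the following sketch (not from the paper; the load and strength distributions, units and sample size are illustrative assumptions) estimates the POF of Eq. (1) for a simple load–strength case by Monte Carlo sampling and checks it against the first-order variance approximation of Eq. (2).

```python
# A minimal sketch (not from the paper): Monte Carlo estimate of the
# probability of failure (POF, Eq. 1) for an assumed load-strength case,
# plus the first-order variance approximation (Eq. 2). All parameter
# values below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed normally distributed design parameters: load L and strength S.
mu_L, sd_L = 400.0, 40.0      # e.g. N
mu_S, sd_S = 600.0, 50.0      # e.g. N

L = rng.normal(mu_L, sd_L, n)
S = rng.normal(mu_S, sd_S, n)

# Performance (limit-state) function g = S - L; the failure region is g < 0.
g = S - L
pof_mc = np.mean(g < 0.0)

# First-order variance equation (Eq. 2): dg/dS = 1, dg/dL = -1.
var_g = (1.0) ** 2 * sd_S ** 2 + (-1.0) ** 2 * sd_L ** 2

print(f"Monte Carlo POF        : {pof_mc:.5f}")
print(f"Approx. sigma_g (Eq. 2): {np.sqrt(var_g):.1f}")
```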

2.2 Interval analysis

There are many situations where uncertainty cannot be expressed using PDFs owing to a lack of precise knowledge of probability information. In this situation, interval numbers representing the range of parameter values by lower and upper bounds, i.e. the intervals of deviation of a parameter from its nominal value, may be a useful approach. The mathematics for handling interval numbers is called interval analysis or interval arithmetic (Moore 1966), and deals with arithmetic operations involving interval numbers, given by
$$ [a,\ b] \ \Theta\ [c,\ d] = \{ x\ \Theta\ y \mid a \le x \le b,\ c \le y \le d \} $$
(3)
where Θ denotes +, −, ×, and /, except that [a, b]/[c, d] is not defined if 0 ∈ [c, d].
An important property of interval arithmetic is that the distributive law does not always hold; however, it can be shown that for any interval numbers X, Y and Z
$$ X \times (Y + Z) \subseteq (X \times Y) + (X \times Z) $$
(4)
which is known as subdistributivity. A key issue in interval analysis is that it is over-conservative with repeated variables, although there are developments towards overcoming this. These will not be discussed here but readers are referred to Rao and Berke (1997) and Ugarte and Sanchez (2003). Applications of interval analysis have been observed largely in structural analysis and optimisation (Rao and Berke 1997; Rao and Cao 2002), with recent development of the interval Finite Element Analysis (FEA) (Modares et al. 2004; Tzannetakis et al. 2004).
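As a concrete illustration (not from the paper; the interval endpoints are arbitrary), the sketch below implements the basic interval arithmetic of Eq. (3) and shows both the subdistributivity property of Eq. (4) and the over-conservatism caused by repeated variables.

```python
# A minimal sketch (not from the paper) of interval arithmetic (Eq. 3),
# illustrating subdistributivity (Eq. 4) and the over-conservatism that
# arises when a variable appears more than once. Endpoints are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

X, Y, Z = Interval(-1, 2), Interval(1, 3), Interval(-2, 1)

# Subdistributivity: X*(Y+Z) is contained in X*Y + X*Z but may be narrower.
print(X * (Y + Z))      # Interval(lo=-4, hi=8)
print(X * Y + X * Z)    # Interval(lo=-7, hi=8)  -- wider, more conservative

# Dependency problem: X - X should be exactly 0, but interval arithmetic
# treats the two occurrences as independent and over-estimates the range.
print(X - X)            # Interval(lo=-3, hi=3)
```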

2.3 Fuzzy set theory

In Fuzzy set theory, a formal definition of the possibility distribution is given by Zadeh (1975), in which A is a Fuzzy set over a universe of discourse with a membership function, μA, taking values between 0 and 1
$$ \mu_{A}(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{if } x \notin A \\ p,\ 0 < p < 1 & \text{if } x \text{ partially belongs to } A \end{cases} $$
(5)
The proposition “X is A”, associating the variable X with the concept represented by A, induces a possibility distribution ΠX on X, which restricts the values that X may take. Thus, the possibility distribution function of ΠX, denoted by πX, is defined to be numerically equal to the membership function of A as
$$ \pi_{X}(x) \triangleq \mu_{A}(x) \quad \text{for all } x \in A $$
(6)
Standard arithmetic and algebraic operations can be extended to Fuzzy arithmetic and Fuzzy algebraic functions by means of the extension principle (Zadeh 1975). The extension principle describes the mapping from Fuzzy sets A1, ..., An in X1, ..., Xn to a Fuzzy set B in Y through the function f, where \( B = f(A_{1}, \ldots, A_{n}) \). The membership function of B is given by
$$ \mu_{B}(y) = \sup_{y = f(x_{1}, \ldots, x_{n})} \left\{ \min \left[ \mu_{A_{1}}(x_{1}), \ldots, \mu_{A_{n}}(x_{n}) \right] \right\} $$
(7)
An important concept in Fuzzy theory is the notion of the α-cut, as illustrated in Fig. 1. For a number α in the unit interval [0,1], an α-cut of a Fuzzy set A is a real set that consists of all elements whose membership in A is larger than or equal to α, which is written mathematically as (Klir and Smith 2001)
$$ A_{\alpha } = {\left\{ {x|\mu _{A} (x) \ge \alpha } \right\}} $$
(8)
Fuzzy sets have been used to represent the concept of plausibility of a value under a possibility distribution (Zadeh 1978). An imprecise parameter may be assigned a membership function corresponding to various degrees of possibility, indicated by α-levels. The concept of possibility has been criticised due to the lack of an operational definition (Cooke 2004); however, the author admitted that such a definition is not impossible. Given the extensive research on this topic and more applications in solving real world problems, we contend that a definition will emerge eventually. We refer to the early concept of possibility defined by Shackle as the degree to which an observer would be surprised by an occurrence (or potential surprise) (Parson and Hunter 1998). According to this definition, if an event is wholly possible, then there is no surprise attached to its occurrence; conversely, if an event is believed to be wholly impossible, then its occurrence will be accompanied by the maximum degree of surprise. This notion is consistent with the heuristic connection between possibility and probability, i.e. any event that is probable must also be possible but the converse is not true (Zadeh 1978). Mathematically, the relationship between probability and possibility, known as the consistency principle, can be expressed as an inequality relationship (Dubois et al. 2004)
$$ P(x) \le \Pi(x) \quad \text{for all } x \subseteq \Re $$
(9)
where

P(·): probability
π(·): possibility.

For the representation of numerical uncertainty, a class called the normal Fuzzy numbers is generally used (Moens and Vandepitte 2004). A normal Fuzzy number has a maximum value of unity, with membership increasing towards the peak and decreasing away from it, as shown in Fig. 1. The real interval corresponding to α = 1 for an imprecise parameter assigned with a Fuzzy number indicates values of definite possibility, values between this point/range and the support have uncertain memberships, and any values beyond the support have no possibility of occurrence. The wider the support is, the greater the imprecision in the parameter (Wood et al. 1990). Fuzzy sets best describe uncertainty due to imprecision, applications of which have been seen in structural analysis and evaluation, and also in representing uncertainty in early design attributes, etc. (Möller et al. 2000; Stroud et al. 2001; Venegas and Labib 2005).
Fig. 1

Notion of α-cuts in Fuzzy theory
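As an illustration of these concepts (not from the paper; the triangular Fuzzy numbers are arbitrary), the sketch below evaluates α-cuts of triangular Fuzzy numbers (Eq. 8) and performs Fuzzy addition level by level through interval arithmetic on the α-cuts, which for monotone operations agrees with the extension principle (Eq. 7).

```python
# A minimal sketch (not from the paper): alpha-cuts of triangular Fuzzy
# numbers (Eq. 8) and Fuzzy addition computed on alpha-cut intervals.
# Parameter values are illustrative only.
import numpy as np

def alpha_cut(a, m, b, alpha):
    """alpha-cut [lo, hi] of a triangular Fuzzy number (a, m, b)."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_add(tri1, tri2, levels=np.linspace(0.0, 1.0, 5)):
    """Add two triangular Fuzzy numbers via interval addition at each alpha."""
    cuts = []
    for alpha in levels:
        lo1, hi1 = alpha_cut(*tri1, alpha)
        lo2, hi2 = alpha_cut(*tri2, alpha)
        cuts.append((alpha, lo1 + lo2, hi1 + hi2))
    return cuts

A = (8.0, 10.0, 13.0)   # e.g. an imprecise load estimate
B = (1.5, 2.0, 2.5)     # e.g. an imprecise correction term

for alpha, lo, hi in fuzzy_add(A, B):
    print(f"alpha = {alpha:.2f}: [{lo:.2f}, {hi:.2f}]")
```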

2.4 Uncertainty in engineering design

The initial stages of the design process are most uncertain owing to under-defined and ambiguous information regarding the product to be designed. Hans-Jurgen suggested that the engineering design process starts with mostly linguistic and only partly interval-valued or numerical information but aims to obtain mostly numerical information (Hans-Jurgen 2002). Therefore, by definition, the nature of information uncertainty evolves during design as it is subjected to continual refinement (Giachetti et al. 1997). As design knowledge is accumulated from testing and analysis throughout the course of the product introduction process (Fajdiga et al. 1996), uncertainty in the data and model becomes more important to resolve, but at the same time the understanding of uncertainty becomes more complete and thus probabilistic methods may be suitable (Moens and Vandepitte 2004). Probabilistic methods require accurate descriptions of data and model to enable meaningful results to be produced. In most design problems, and especially in the early stages, data for estimating probability distributions is particularly limited. Probabilistic methods are thus likely to be inappropriate where the impact of uncertainty in design is greatest. In fact, the success of classical probability methods has been largely confined to those well-established design domains that have sufficient information for probabilistic analysis to be carried out.

As noted, uncertainties in the design process and analysis are traditionally accounted for in an informal and ad hoc manner, for example, through the use of safety factors and conservative calculations. As variability and uncertainty inherent in the process and system are not explicitly defined, deterministic approaches are not conducive to modern risk assessment, reliability prediction and robust design (Langley 2000). Deterministic approaches often lead to suboptimal products and, in some situations, opportunities for designing more competitive design solutions may be lost (de Neufville 2004). Previously there have been attempts to bridge the design philosophies of safety factor and probabilistic uncertainty (McCalley 1957; Mischke 1970) but with limited success. In this respect, possibility theory is a promising approach to facilitate uncertainty definition where limited information exists, therefore providing a mechanism to express uncertainty levels that are not suitably quantifiable by intervals or PDFs. The various uncertainty theories are, however, based on different axioms and research is generally limited to and done within the frameworks of these axioms. It is important that a conceptual framework is provided to capture the evolving nature of uncertainty and to allow for its impact on the decisions that are taken during the design process to be more fully appraised. In addition, formal information and decision-support systems are also decidedly weak in supporting the management of uncertainty in engineering design and simulations. The framework presented below may provide a suitable basis for incorporating uncertainty in such systems.

3 Framework for uncertainty characterisation

To capture the evolution of knowledge regarding uncertainty throughout the course of design analysis, a framework for aiding uncertainty management in product development is devised. The framework will allow the assessment of confidence and uncertainty before the design becomes mature, and help in minimising risks in decisions based on simulation outcomes during development stages. The basis for the framework was drawn from experience with case studies and deduction from a wide range of literature cases. The design analysis characteristics are first outlined prior to the discussion of a classification for organising knowledge of uncertainty in design simulations.

3.1 Design analysis characteristics

It is hypothesised that engineering design analyses of the type this paper addresses have the following characteristics, although not all aspects will always be present. These characteristics can be identified in the supporting cases presented in a later section.

3.1.1 A true relationship exists between the inputs and the outputs for the performance function

A true relationship refers to the actual behaviour of a real physical system in relation to some external parameters, where various aspects of its behaviour are governed by the laws of physics and other sciences. Engineers seek to represent these true relationships as faithfully as possible using their knowledge of the governing physical phenomena in the system of interest. However, their understanding is often limited by commercial pressures and legislation and, as a result, modelling activities are a compromise between model detail and available resources (Apeland et al. 2002; Sargent 1998). A true relationship will exist for relating the design parameters to the performance parameters (for each specific function of the artefact). However, there will always be uncertainty and incompleteness in the modelling of this true relationship, and these may be reduced only if resources are expended to build and validate the models.

3.1.2 Evidence from in-service experience of the artefact performance may be related to the function

In-service evidence is collected from products subjected to real use conditions and is a result of users' interactions with the products. An example is evidence of the range of input values for which satisfactory (or unsatisfactory) performance of the product has been observed. The evidence may apply only to the overall functions (e.g. there may be knowledge that an artefact has developed a fatigue crack, but no measurements of the stress cycles that have led to the crack). Notionally, companies will have collected industry-specific experience with in-service product behaviour, failure records and correlation analyses related to their products.

3.1.3 Experimental or test evidence exists for the relationship between inputs and outputs

This evidence is collected when products are subjected to a series of load cases which are representative of the expected in-service conditions. Prototypes may be built and tested under predicted in-service conditions and need to meet the performance criteria set in the testing phase. Some physical tests may be performed before sign-off to investigate critical aspects related to the product performance. These are often limited as testing is generally costly and time-consuming, which is seen as a competitive disadvantage.

3.1.4 Analytical functions exist that provide the relationship between the inputs and outputs of the system behaviour

For each load case, the function may be approximated or modelled from physical principles or, sometimes, heuristics to reflect the true relationship in characteristic (Sect. 3.1.1) above. These analytical functions will deviate from the true relationship owing to limitations in the modelling approach, lack of knowledge and deliberate simplifications for economy and convenience (Nilsen and Aven 2003). Alternative models will be available for the analysis of a performance target depending on the level of detail or accuracy desired. Examples of models used for analysis are
  (a) Analytical equations derived from first principles.

  (b) Numerical models, e.g. FEA and Computational Fluid Dynamics (CFD), where the boundary conditions, mesh sizes and other analysis variables have great influence on the accuracy of the results.

3.1.5 Approximation functions may be identified for the analytical functions

Approximation functions have been used for decades in empirical model building. Empirical relationships are obtained from observation of a set of data and are often used when the behaviour of interest is too complex to model using first principles. More recently, approximation functions such as response surface functions have become more popular in probabilistic design to avoid the repeated computation of complex actual performance functions (Bucher and Bourgund 1990). An approximate model over a limited region of interest may be obtained by fitting a metamodel to the relationships in characteristics (Sects. 3.1.3 and 3.1.4). The approximate models introduce additional errors due to deliberate simplification in approximating the actual functions (for example through the use of low order polynomials) (Box and Draper 1987).
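A minimal sketch of this idea follows (not from the paper): a second-order polynomial response surface is fitted to a few runs of a stand-in "expensive" transfer function, illustrating the additional approximation error a metamodel introduces. The function and sample points are assumptions for illustration only.

```python
# A minimal sketch (not from the paper): fitting a low-order polynomial
# response surface to a small set of simulation runs, as one way to
# approximate an expensive transfer function (Sect. 3.1.5).
import numpy as np

def expensive_transfer_function(x):
    # Placeholder for an FEA/CFD run or detailed analytical model.
    return 3.0 + 2.0 * x + 0.5 * np.sin(3.0 * x)

# Sample the design parameter over a limited region of interest.
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_transfer_function(x_train)

# Second-order polynomial metamodel (a deliberate simplification).
coeffs = np.polyfit(x_train, y_train, deg=2)
metamodel = np.poly1d(coeffs)

# The metamodel introduces an additional approximation error.
x_check = np.linspace(0.0, 2.0, 50)
approx_error = np.max(np.abs(metamodel(x_check) - expensive_transfer_function(x_check)))

print(f"Fitted coefficients: {coeffs}")
print(f"Max approximation error over region: {approx_error:.4f}")
```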

3.1.6 Design parameters that characterise the relationships in Sect. 3.1.4 and Sect. 3.1.5

Design parameters are defined by Nam Suh (2001) as the key physical variables in the physical domain that characterise the design that satisfies the specified functional requirements. The design parameters may have great influence on the accuracy of predicted results as uncertainty in them is propagated through the functions in characteristics (Sects. 3.1.4 and 3.1.5) to the performance parameters. These functions will be generally referred to as the transfer functions. Distinction may also be made between controllable and uncontrollable design parameters. Controllable parameters, such as design and tuning variables, can be varied (by the designer's choice) over the design space to achieve the desired product design. Examples of uncontrollable parameters include environmental variables and noise. Design parameters may be of discrete or continuous nature, and may be statistically correlated or dependent on one another. For discrete variables, probability mass functions and discrete Fuzzy numbers can be used instead to represent the uncertainty. All the uncertainty in these parameters will be propagated to the performance parameters through the transfer functions.

Performance simulation of an engineering product may involve multiple design targets, each associated with different load cases and failure modes. The analysis of each load case will be based on design parameters, and will involve one or more transfer functions for the computation of the performance parameters of interest. Transfer functions are known as performance functions when the characteristics of interest relate to the performance of a product, or objective functions in optimisation problems. These transfer functions may take different forms depending on the mechanism for the physical process. For instance, the thermal fatigue, mechanical fatigue and wear of an engine will be modelled differently. In design simulation, virtual prototypes built on various theories and computational models will be tested against the performance targets that the product is designed to achieve, and the performance parameters identified from these virtual prototypes may be compared to physical evidence to verify the accuracy of the virtual evaluation. A classification that organises the comparisons between predicted and observed performance parameters is presented next.

3.2 The classification of uncertain design correlations

The framework proposed in this paper allows the design correlations described in Sect. 3.1 above to be classified according to the extent and nature of the evidence concerning uncertainties both in design data and in the simulation models. A classification that organises the knowledge of this evidence is represented through a three-dimensional Cartesian system, as shown in Fig. 2. The axes of this figure are the i-axis—knowledge of uncertainty in the performance parameter, the j-axis—extent of the physical performance evidence and the k-axis—number of entities in the performance space. The i- and j-axes are scaled according to the completeness of data available for characterising the variables. The performance space (k-axis) is organised according to the number of solutions of the design concept for which knowledge and evidence of uncertainty for a performance target are available. The origin (0,0,0) represents cases with novel concepts where no prior information can be drawn from an existing system or analysis model. In this situation, indirect and subjective evidence is sought to accumulate further understanding about the system to be designed. When sufficient confidence is attained in a design solution, further examples may be produced and experience with them will validate initial findings. For an established design there may be many exemplars, and as more similar systems are designed and validated, the knowledge accumulated in this manner enhances the knowledge base for adaptations of the design, signifying the progression of design know-how, i.e. knowledge for each axis improves as one moves away from the origin. The definitions for the scales in each axis are detailed in subsequent sub-sections.
Fig. 2

Classification of uncertain design correlations, (i, j, k) = (performance parameter, evidence, performance space)

3.2.1 Performance parameter (i-axis)

The performance of the design is characterised by performance parameters, derived from the design parameters by considering the design subject to some operating regime, and as previously noted, the relationship between design parameters and performance parameters may be considered to be modelled by some form of transfer function(s). Variability and uncertainties can be found in the design parameters (e.g. dimensional or material properties), in the characterisation of the operating regime (the load cases) and in the transfer functions themselves (Barton et al. 1999). As said, models used in the mapping are approximations to real world systems, in which there are conditional assumptions, limited available data or incomplete knowledge (Laskey 1996). Therefore, uncertainty in both design parameters and transfer functions is reflected in the performance parameters.

A classification of performance parameters according to the completeness of the data used to describe them has been devised. The scale progresses from the limited data contained in a single deterministic value to the increasing completeness or precision of a PDF, as shown in Fig. 2. The definition for each scale position in the i-axis is
  1. Deterministic means the output of a transfer function is only available for a single value (or set of values), obtained by propagating a nominal set of design parameters through the function.

  2. Interval value means the output of a transfer function is available as intervals with no information on the likelihood of occurrence except for the absolute lower and upper bounds. This interval value is derived from objective rather than subjective information.

  3. PDF means the output of a transfer function is described probabilistically. The normal distribution is assumed here owing to its wide use in engineering, but other types of PDFs such as the Weibull and lognormal may be used (Bury 1999).

3.2.2 Evidence (j-axis)

Evidence regarding the actual performance of the modelled artefact is typically gathered from in-service product behaviour, failure records, prototype tests and correlation analyses for similar but not necessarily identical products. Correlation and validation of simulation results against experimental test data can be complicated by the lack of evidence due to resource constraints and by difficulty in obtaining real life data, but in some cases there may be an abundance of evidence for engineers to draw correlations against. Therefore, evidence of varying degrees may exist for correlations and may be used in validation to justify the confidence in design analysis (Kleijnen 1995). The scale for classifying the availability of validation evidence has been defined as follows:
  1. Single observation—validation evidence is available for a single observation only, for example from a prototype test.

  2. Range of observations—validation evidence is available for a small number of observations, but no inference on likelihood could be drawn from these observations except for the absolute bounds.

  3. Statistical set of observations—validation evidence is available for a large number of observations, sufficient for statistical data to be derived, providing a reliable source of evidence.
Besides the classification according to the extent of evidence available, the classification of evidence may need to be extended, for example, to cover its source. The sources of evidence available for correlation purposes can be distinguished as primary and secondary sources according to their correspondence with the analysis. In general, primary evidence provides a direct correlation between modelling results and experimental measurements for an engineering product, leading to the highest confidence for validation purposes. Where primary evidence is not available, secondary or indirect evidence may have to be sourced from experience with the performance of similar products in similar service conditions or from other similar cases where the same manufacturing process is used. Using this type of evidence involves some decision-making in justifying its confidence and reliability. Some examples of secondary evidence are:
  • Evidence of performance of similar techniques for similar models—e.g. generic confidence in CFD models or the NAFEMS technical benchmarks (NAFEMS 2005) used to draw conclusions about the current numerical model.

  • Evidence of satisfactory/unsatisfactory performance of similar but not identical artefacts—e.g. in-service or historical evidence of satisfactory or unsatisfactory performance of artefacts.

  • Evidence of performance of parts of a more complex process—e.g. stress analysis as part of a fatigue analysis process.

  • Results from other validated models—e.g. verification of results from a new method by comparison with conventional solutions.

In the case of secondary evidence, metrics for establishing proximity and similarity will need to be developed in order to relate to a performance of interest. For instance, methodologies similar to those described in McAdams and Wood (2002) to determine the functional similarity between products may be useful.

3.2.3 Performance space (k-axis)

The performance space is the solution region that has resulted from the exploration of several parameterisations of the design parameters for a product (as in parametric design). For example, load capabilities may be produced for bearings of various sizes but a single design principle. Both analysis and evidence about product performance may exist for a single set of product parameters (e.g. fatigue life of a particular bearing geometry), or more extensively for a wide range of parameters in the feasible design space (e.g. fatigue life for many similar bearings of different sizes). For state-of-the-art systems, analyses and tests for each performance parameter may have been conducted only for a specific design instance. In others, several combinations usually exist and the same type of analytical procedures and tests may have been conducted to verify the behaviour of a number of instances. The latter enables the comparison of predicted results with experimental data for more parametric cases to evaluate the modelling capability in the performance space. The classification proposed categorises the performance space into three main groups:
  1. One means the correlation of performance parameter and evidence is only available for a single instance.

  2. Small number means the correlation of performance parameter and evidence is available for a few instances to suggest the correlation within a limited range of parameters in the performance space.

  3. Large number means the correlation of performance parameter and evidence exists for a relatively large number of instances to suggest significant understanding of the correlation in the performance space.
In truth, the performance space may be continuous or discrete, consisting of numerous points constrained by the feasible solution region; therefore, discretisation of the scale is arbitrary and can only be defined in context. The categorisation may be judged by taking into consideration the number of instances required to fit a sufficiently “good quality” performance surface. This can be influenced by, for example, the nature of the function and the dimensionality of the design space.

The classification proposed is intended to cover diverse situations in engineering validation. For example, in cases where the collection of experimental evidence on the behaviour of the whole artefact is prohibited by cost or difficulty in obtaining real data (e.g. reliability of a nuclear plant), engineers typically resort to extensive use of secondary evidence combined with highly conservative design strategies. In complex analysis cases, deterministic analysis may be carried out but a large amount of physical evidence may be collected to qualify the performance of a design against the specified values of the design targets or functional requirements in the design process. For well-established design principles and extensive field experience it may be possible to undertake fully probabilistic analysis. To substantiate and populate the classification in the framework, various cases from different design domains have been collected and will be discussed in a later section.
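One possible way to record the (i, j, k) classification in a simulation knowledge base is sketched below (not part of the paper; the data structure and field names are hypothetical). The example entry uses the coordinates the paper later assigns to the suspension dynamics case study (i = 3, j = 1, k = 1).

```python
# A minimal, hypothetical sketch of how the (i, j, k) classification of
# Sect. 3.2 could be recorded alongside a correlation case.
from dataclasses import dataclass
from enum import IntEnum

class PerformanceParameter(IntEnum):   # i-axis
    DETERMINISTIC = 1
    INTERVAL = 2
    PDF = 3

class Evidence(IntEnum):               # j-axis
    SINGLE_OBSERVATION = 1
    RANGE_OF_OBSERVATIONS = 2
    STATISTICAL_SET = 3

class PerformanceSpace(IntEnum):       # k-axis
    ONE = 1
    SMALL_NUMBER = 2
    LARGE_NUMBER = 3

@dataclass
class CorrelationCase:
    description: str
    i: PerformanceParameter
    j: Evidence
    k: PerformanceSpace

# Example: the suspension dynamics case study of Sect. 5.1 (i=3, j=1, k=1).
suspension = CorrelationCase("Top mount force, sports utility vehicle",
                             PerformanceParameter.PDF,
                             Evidence.SINGLE_OBSERVATION,
                             PerformanceSpace.ONE)
print(suspension)
```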

4 Error functions

From the evaluation of the requirements for uncertainty characterisation, a method for representing the disparity between simulation results and experimental observations via error functions is developed next. The requirement for the error functions is to record uncertainty information from existing systems and to populate the error characterisation for use in future design applications to assist decision-making and to improve simulation results. From the various combinations of model, data and the extent of available evidence, it is proposed that it may be possible to identify an error function for each load case, design target or failure mode using:
$$ \text{Evidence} = \text{transfer function}\ \Theta\ \text{error function} $$
(10)
where

Θ: addition and/or multiplication.

The addition or multiplication operations allow for correction of the predicted response from the transfer function to better reflect reality. Similar formulations have been mentioned by Zio and Apostolakis (1996) in what was termed an “adjustment factor approach”. According to the authors, the adjustment factor may assume a hypothetical PDF reflecting the model prediction uncertainty (in effect a Bayesian belief). The approach, however, seems to have been adopted only in an ad hoc manner in risk analysis.

4.1 Uncertainty characterisation with error functions

The classification in the proposed framework leads to several categories of correlation case as summarised in Table 1. The table illustrates the categories of correlations with varying degrees of uncertainty describing the i- and j- variables, obtained from analytical and physical evaluation respectively. This categorisation may be used to suggest the most appropriate method for handling uncertainty in the design simulation including Fuzzy set theory, interval analysis and probability theory. Suitable error functions may be formulated for each category using the conventional uncertainty theories singly, or in combination. The scales in the k-axis classify the extent of knowledge available for correlation according to experience with past variants but have no effect on the formulation of error functions within the categories. Figure 3 illustrates two distinct classes. The unhatched area (A, D, F) indicates correlation between parameters is characterised by a single uncertainty theory [probability theory (F), interval analysis (D), Fuzzy set theory (A)]. The hatched area (B, C, E) indicates correlations that require a combination of uncertainty theories to formulate an error function. Even though the mathematics within conventional uncertainty theories is well established, research in dealing with a combination of different uncertainty theories is less mature (Ferson et al. 2004). A hybrid approach for characterising probabilistic uncertainty and imprecision has been proposed using the probability box (or p-box) (Ferson et al. 2002) where some applications are reported (Rekuc et al. 2006). Several transformations have been proposed in the literature (Dubois et al. 2004; Klir and Smith 2001; Zadeh 1978) according to the consistency principle in Eq. (9). In this paper the principle of maximum specificity proposed by Dubois et al. (2004) is adopted for the transformation from probability to possibility for deriving the error functions in the combined categories. The possibility distribution, π(x) that optimises the information content obtained from this transformation encodes the family of confidence intervals around the mode of the PDF, p(x), i.e. the α-cut of π(x) is the (1 − α) confidence interval of p(x) (Dubois et al. 2004).
Table 1

Categories of correlation between analysis and experiment results (i: performance parameter, j: physical evidence and k: performance space)

Fig. 3

Precision of error functions and the progression of confidence

A pragmatic method for recording error is proposed by separating the first and second moments of data to capture the systematic and random aspects of uncertainties for existing systems in correlation with evidence. In statistical terms, the first moment of a sample of data is the central tendency (or mean), and the relative difference between the means of the actual and predicted performance parameters is the systematic or bias uncertainty measure, φ. The second moment of data is the measure of dispersion (or variance), and the ratio of the dispersions of the actual and predicted performance parameters is the random uncertainty measure, ε. The error functions (EF) can be defined as
$$ \mathrm{EF}(\varphi, \varepsilon) = f(\phi_{\text{real}}, \phi_{\text{TF}}, \delta_{\text{real}}, \delta_{\text{TF}}) \left\{ \begin{aligned} &\mathrm{EF}^{1}{:}\ \varphi = (\phi_{\text{real}} - \phi_{\text{TF}})/\phi_{\text{TF}} \\ &\mathrm{EF}^{2}{:}\ \varepsilon = \delta_{\text{real}}/\delta_{\text{TF}} \end{aligned} \right. $$
(11)
where

EF1: the error function accounting for systematic discrepancy
EF2: the error function accounting for random discrepancy
ϕreal: first moment of the observed performance parameter
ϕTF: first moment of the predicted performance parameter
δreal: second moment of the observed performance parameter
δTF: second moment of the predicted performance parameter.

This definition of systematic and random uncertainty measures can be extended to cases described by Fuzzy and interval numbers, by denoting the first and second moments of data with their equivalent parameters, e.g. ϕ = average and δ = range of an interval number.
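The following sketch (not the authors' tooling; the sample data are fabricated for illustration, and dispersion is taken here as the sample standard deviation) shows one way to compute the systematic (EF1) and random (EF2) measures of Eq. (11) from predicted and observed data sets.

```python
# A minimal sketch (not from the paper's tooling): computing the systematic
# (EF1) and random (EF2) error-function components of Eq. (11) from a
# predicted and an observed set of performance data. Sample values are
# illustrative only; dispersion is taken as the sample standard deviation.
import numpy as np

def error_functions(observed, predicted):
    """Return (phi, eps): systematic and random uncertainty measures."""
    phi_real, phi_tf = np.mean(observed), np.mean(predicted)
    delta_real, delta_tf = np.std(observed, ddof=1), np.std(predicted, ddof=1)
    phi = (phi_real - phi_tf) / phi_tf      # EF1: relative bias
    eps = delta_real / delta_tf             # EF2: dispersion ratio
    return phi, eps

rng = np.random.default_rng(1)
observed  = rng.normal(1000.0, 60.0, 30)    # e.g. measured performance values
predicted = rng.normal(1050.0, 45.0, 30)    # e.g. simulated performance values

phi, eps = error_functions(observed, predicted)
print(f"EF1 (phi) = {phi:+.3f}  (phi < 0 indicates over-estimation by the model)")
print(f"EF2 (eps) = {eps:.3f}   (eps > 1 indicates variability is under-estimated)")
```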

The current development of error functions is based on the assumption of symmetric data and that, for a deterministic parameter, a suitable Fuzzy number may be elicited from experts. A deterministic design approach results in loss of second moment (or spread) information. Category A is the correlation between a deterministic performance parameter and a single source of evidence. In this situation, engineers are uncertain about the bounds or probability of the predicted and experimental responses and need to rely on their expert judgement or experiential knowledge with regard to the uncertainty, which will be imprecise in this case. We suggest that for a deterministic parameter, a suitable Fuzzy number may be elicited from experts to represent a subjective assessment of imprecision in the performance parameter and evidence, by assigning a possibility distribution to it through a membership function, μ(x) (Zadeh 1978). In fact, intervals and subjective probability distributions (a Bayesian approach) are also legitimate candidates for representing uncertainty in the absence of objective data. The choice is with the engineer but the results should be interpreted accordingly. Systematic and structured methods are important to ensure the precision and reliability of subjective measures obtained (O'Hagan 1998) and methods have been proposed and studied (Gong 2006; Sandri et al. 1995). Some researchers have claimed that the expression of possibility and Fuzzy sets appears to be more pragmatic and intuitive to experts when judgement is required (Raufaste et al. 2003). In this paper, the focus is on the development of a framework that enables a consistent interpretation of confidence in the simulation predictions in the light of evolving uncertainty in the design process. We assume that, in all cases, the quality of input data to the system is the best in class and should be refined continually as new knowledge and evidence becomes available.

Uncertainty is widely classified into aleatory and epistemic: aleatory uncertainty refers to stochastic variability whereas epistemic uncertainty refers to the lack of knowledge. Separation of aleatory and epistemic uncertainty has been widely recognised but little work is seen in mitigating both types of uncertainties in simulation-based design environments (Du and Chen 2000). It is envisaged that the isolation of systematic and random uncertainties will better represent epistemic uncertainty and stochastic variability arising from simulation processes. Although the error functions will not correct for uncertainty with absolute accuracy, they are potentially useful to give insights into the accuracy of the data and models used in simulation procedures for design applications. The uncertainty measures in error functions can be applied for corrections of another design simulation via:
$$ \phi _{{{\text{real}}}} = \phi _{{{\text{TF}}}} (1 + \varphi ) $$
(12)
$$ \delta _{{{\text{real}}}} = \delta _{{{\text{TF}}}} \times \varepsilon $$
(13)
and also from Eq. (11), we can see that the errors are minimum when:
$$ \varphi \to 0\;{\text{and}}\;\varepsilon \to 1 $$
(14)
where

φ: systematic uncertainty measure
ε: random uncertainty measure.

The error functions support uncertainty characterisation by indicating the discrepancies between modelling and observed results, aiding the assessment of confidence in data and model representations for an analysis procedure. For example, the systematic uncertainty measure, φ, may be used to judge if a modelling approach is consistently over-estimating (φ < 0) or under-estimating (φ > 0) the actual behaviour or performance parameter of interest. Accuracy in alternative models may be compared and over-conservatism resulting in uneconomic designs can be avoided. A value close to zero for φ indicates less bias uncertainty, therefore the model predicts the bulk of the performance parameter more accurately. The random uncertainty measure, ε, may also have significant influence on the accuracy of results, especially for performance criteria that are sensitive to dispersion, e.g. probability of failure. Under-estimation of variability (ε > 1) in this case will cause a higher than expected number of product failures, whereas over-estimation of variability (ε < 1) may cause designers to specify tighter specifications than necessary, e.g. manufacturing tolerances or material strength, resulting in extra cost and weight. A value close to unity for ε indicates less random uncertainty.
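A short sketch of how stored uncertainty measures might then be applied to a new variant prediction via Eqs. (12) and (13) is given below (illustrative numbers only, not case-study values).

```python
# A minimal sketch (not from the paper's tooling): applying previously
# characterised error-function measures (Eqs. 12 and 13) to correct the
# prediction for a new design variant. All numbers are illustrative.
phi, eps = -0.048, 1.33                    # e.g. from a correlated existing variant

mean_tf_new, spread_tf_new = 980.0, 42.0   # mean and dispersion predicted for the variant

mean_est = mean_tf_new * (1.0 + phi)       # Eq. (12): correct the systematic bias
spread_est = spread_tf_new * eps           # Eq. (13): rescale the dispersion

print(f"Corrected mean estimate       : {mean_est:.1f}")
print(f"Corrected dispersion estimate : {spread_est:.1f}")
```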

4.2 Improving confidence in design analysis

The precision in error functions developed according to the classification in the framework can be used to judge confidence in correlations between the analysis performance parameter and evidence. For instance, a scale (graduated shades) for confidence related to the precision in error functions is indicated in Fig. 3. The density of shading is an indication of the completeness of data describing the analysis performance parameters and evidence. The precision (and confidence) in an error function does not, however, imply that the actual response is accurately predicted. For instance, the error functions obtained from correlation between probabilistic parameters (Category F) are precise but accuracy in the modelling may be low due to large systematic errors between the modelling results and evidence. The error functions can also be used to identify critical areas and to optimise allocation of resources to reduce errors—to select more suitable design representations, to focus effort in data collation, and to select suitable design techniques by indicating relative measures of uncertainty and confidence among the available alternatives. For instance, in situations where the bias uncertainty is significantly larger than the random uncertainty, i.e. φ ≫ ε, a deterministic analysis and a scalar valued error function may suffice until a more detailed model with higher accuracy can be justified. In this manner, error functions provide a mechanism to estimate the risks and uncertainties inherent in an application of analysis or simulation and to assess the adequacy of simulations as a replacement for prototype tests, in order to focus engineering effort to improve confidence in simulation-based design.

Arguably, ultimate confidence in engineering design can only be achieved by accounting for uncertainty in an objective manner. Probability theory is by far the most appropriate to represent uncertainty in engineering simulations owing to its suitability to propagate numerical and objective uncertainty information. Other methods like interval analysis and Fuzzy set theory may play a greater role in providing a means to deal with uncertainty during the early and intermediate design stages, and for novel artefacts. Ideally, the design approach should progress diagonally upwards through design iterations as design data becomes more complete, i.e. from deterministic to probabilistic values as more information is gathered. Nevertheless, some iterations may be necessary when decisions for design changes are made and new design parameters are acquired. The error representation through error functions will also evolve from imprecise possibility to precise probability. The framework provides a roadmap to identify a progression from current state of data and model representations to achieve the most desired state—precise correlation between the analysis performance parameter and evidence (Category F) in order to attain confidence in design simulations. For example, the route for the development of confidence in simulation for a current state of category C is to follow the path from C–E–F, which requires the company to collect more evidence regarding the performance of more similar systems to enable more precise characterisation of error. A systematic documentation of evidence collected over many design variants such as that proposed in the framework is suggested to assist the company to build a more robust and reliable simulation-based design environment. However, there are implications on resource commitment for the accumulation of design knowledge as illustrated in Fig. 3, which typically constrains the progression of confidence to achieve precise correlation of analytical performance and evidence.

5 Case studies

Twenty cases from the literature have been collected and presented in (Goh et al. 2005a) to substantiate and populate the (i, j, k) coordinate system model shown in Fig. 2. Although the number of cases represents a substantial population size, it only represents a partial population of the 48 coordinates (4 × 4 × 3) in the classification. These cases illustrated the rationale for the development of suitable error functions for each of the categories they represent and are included in Appendix I for reference. In addition, two more extensive case studies have been conducted by the authors as part of this research (Booker et al. 2004; Goh et al. 2005b). The framework and error functions are now illustrated with the two case studies.

5.1 Suspension dynamics case study

A case study on the suspension dynamics of a sports utility vehicle (Goh et al. 2005b) was conducted to compare analytical and experimental values of road loads transferred onto the chassis, to gain an understanding of the systematic and variance errors arising from various data and model representations. The collaborating company was interested in establishing the influence of statistical variability in component dimensions, properties and assembly factors on the estimated loads from simulations, as well as the confidence in these predictions. In particular, if the company could establish sufficient confidence in simulation-based design, the use of intermediate prototypes could be reduced (especially in non-critical areas), thus saving product development time and cost. The correlation assessed from the current system was then used to judge the potential accuracy of modelling predictions for the estimation of load transfer in the early design stage of a variant vehicle where an experimentally measured response is not available. The characteristics relating to the framework and error functions for this case study are now established.

5.1.1 Classification

Performance parameters—the average and range of vertical top (suspension) mount force were predicted from two models, where
  • Design parameters were suspension component properties, with their statistical variations first assumed from published data and specifications, then improved with data measured from tests conducted.

  • Transfer functions were available to provide an analytical relationship between the design parameters and performance parameter of the function:
    (a) Computational model—MSC.ADAMS model (37 degrees of freedom)

    (b) Analytical model—simplified model (1 degree of freedom).

Performance parameters from various data sets and models were described by normal PDFs, providing a probabilistic description of variability. This corresponds to i = 3 in the scale for the performance parameter.

Evidence for the top mount force was derived from a load history measured from a laboratory test on a prototype vehicle. Experimental or test evidence exists for the performance parameters from a single vehicle, but the actual properties (design parameters) of this test system are unknown. Testing requires very expensive hardware and data acquisition systems that typically cost automotive companies millions of dollars of investment per car tested and take months to set up. Evidence for this case study was only available from a single vehicle, corresponding to j = 1 in the scale for evidence.

The performance space for this case study contains only one correlation case for a specific suspension system design, but the collaborating company will have collected vast experience from designing variants of the vehicle type. However, the performance space considered in this case study involved only one solution, therefore implying k = 1 in the scale for performance space.

5.1.2 Error functions

The top mount forces predicted from various combinations of data and model representations for a stationary 10 s time frame are given in Fig. 4, along with the single experimental observation. Nominal values of the top mount force obtained from deterministic analyses are indicated by arrows in the same figure. The mathematical formulation of error functions for category C from Table 1 has been applied to this case study. The error functions required a combination of a Fuzzy number (fitted to the single available observation) and a normal PDF, and were developed based on the consistency principle (Eq. 9). The optimal possibility distribution function corresponding to a normal PDF is expressed as (Nikolaidis et al. 2004)
$$ \pi(x) = 2\left[ 1 - \Phi\left( \left| \frac{x - \mu}{\sigma} \right| \right) \right] $$
(15)
where

Φ(·): standard normal distribution function.

Possibility distributions for experimental and modelled top mount force are illustrated in Fig. 5. Owing to the lack of knowledge of the actual PDF in the experimental measurement of the top mount load, this single suspension system example could lie anywhere within the variability range described by its PDF. It is reasonable to assume that the uncertainty in the physical suspension system is similar to that predicted from its most accurate modelling representation—the ADAMS model with measured variables. Pessimistically, this measured system could lie at a “lower bound” or an “upper bound” of this PDF, where the worst-case uncertainty can be expected. The experimental point is then fitted with a triangular possibility distribution with its peak equivalent to the measured value and its support equal to twice the width of the PDF, i.e. 2(±3σ). The possibility distribution can be interpreted as the degree of possibility, with its peak representing absolute possibility and values beyond its support having no possibility of occurrence. The error functions computed for various models and data representations for the prediction of average top mount force are tabulated in Table 2 for several α-cuts.
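The Category C calculation described above can be sketched as follows (not the authors' code; the mean, standard deviation and measured value are assumed for illustration). The model prediction is converted to a possibility distribution with Eq. (15), the single measurement is given a triangular possibility distribution with support 2(±3σ), and EF1 and EF2 are evaluated on the α-cut intervals.

```python
# A minimal sketch (not the authors' code) of the Category C error-function
# calculation: Eq. (15) converts the model's normal PDF to a possibility
# distribution, the single measurement gets a triangular possibility
# distribution with support 2(+/-3 sigma), and EF1/EF2 are evaluated on
# alpha-cut intervals. All numerical values are illustrative assumptions.
from statistics import NormalDist

mu_tf, sd_tf = 1050.0, 45.0    # predicted top mount force (assumed values)
x_exp = 1000.0                 # single experimental observation (assumed)
sd_ref = 45.0                  # sigma used to set the triangular support

def model_cut(alpha):
    """alpha-cut of pi(x) = 2[1 - Phi(|x - mu|/sigma)] (Eq. 15)."""
    half = sd_tf * NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return mu_tf - half, mu_tf + half

def evidence_cut(alpha):
    """alpha-cut of a triangular possibility centred on the measurement."""
    half = 6.0 * sd_ref * (1.0 - alpha)
    return x_exp - half, x_exp + half

ef1 = (x_exp - mu_tf) / mu_tf                  # systematic measure (from the peaks)
for alpha in (0.0001, 0.25, 0.5, 0.75):
    lo_m, hi_m = model_cut(alpha)
    lo_e, hi_e = evidence_cut(alpha)
    ef2 = (hi_e - lo_e) / (hi_m - lo_m)        # random measure (ratio of widths)
    print(f"alpha={alpha:>6}: EF1={ef1:+.3f}  EF2={ef2:.2f}")
```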
Fig. 4

Correlations between predicted top mount force and experimental measurement

Fig. 5

Possibility distributions for experimental and predicted top mount force

Table 2  Error functions for suspension analysis case study

Models and data                       α-Cut     EF1 (φ)    EF2 (ε)
ADAMS with measured variables         0.0001    −0.07      1.54
                                      0.25                 3.91
                                      0.5                  4.45
                                      0.75                 4.71
                                      1
ADAMS with assumed variables          0.0001    −0.08      1.80
                                      0.25                 4.56
                                      0.5                  5.18
                                      0.75                 5.49
                                      1
Simplified with measured variables    0.0001    −0.12      0.86
                                      0.25                 2.17
                                      0.5                  2.47
                                      0.75                 2.61
                                      1
Simplified with assumed variables     0.0001    −0.13      1.01
                                      0.25                 2.56
                                      0.5                  2.91
                                      0.75                 3.08
                                      1
Observation of the error functions formulated for the various model and data representations for the prediction of top mount force indicates that varying scales of systematic and random uncertainty exist. The systematic and random uncertainties are characterised separately by the two components of the error functions, so that a distinction between the two types of uncertainty is maintained. For instance, suspension dynamics modelling is found to over-estimate the average vertical top mount force, observed in EF1 being consistently negative. Owing to the lack of variability information, there is greater imprecision in the experimental system with respect to its second moment. The supports (widths of the possibility distributions at α ≈ 0) obtained for the normal distributions are wide, as their limits extend to ±∞. Therefore, the EF2 values are mainly greater than unity (under-estimation), except near the supports of the simplified model. The error functions so proposed maintain the more conservative uncertainty axiom in the correlation (Fuzzy in this case, owing to the imprecision in the experimental parameter) so that extra information content is not added. Nevertheless, they convey the systematic and random components of uncertainty associated with the modelling results in a manner consistent with the interpretation of possibility theory.
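To illustrate how the α-cut intervals underlying Table 2 can be extracted and compared, the sketch below computes, at each α-cut, the interval of the triangular (experimental) distribution and of the possibility distribution induced by the modelled normal PDF (Eq. 15), and then forms two illustrative interval measures: a normalised offset of the interval midpoints and a ratio of interval widths. These two measures are stand-ins assumed here for demonstration only; the paper's own definitions of EF1 and EF2 for category C should be substituted where available. The numerical inputs are placeholders.

```python
import numpy as np
from scipy.stats import norm

def alpha_cut_normal(alpha, mu, sigma):
    """Alpha-cut of the possibility distribution induced by a normal PDF
    via Eq. 15: pi(x) >= alpha  <=>  |x - mu|/sigma <= Phi^-1(1 - alpha/2)."""
    k = norm.ppf(1.0 - alpha / 2.0)
    return mu - k * sigma, mu + k * sigma

def alpha_cut_triangular(alpha, peak, half_support):
    """Alpha-cut of a symmetric triangular possibility distribution."""
    h = (1.0 - alpha) * half_support
    return peak - h, peak + h

def interval_error_measures(exp_iv, mod_iv):
    """Illustrative interval comparisons (assumed forms, not the paper's Eqs.):
    offset of midpoints normalised by the model midpoint, and ratio of widths."""
    exp_mid, mod_mid = np.mean(exp_iv), np.mean(mod_iv)
    exp_width = exp_iv[1] - exp_iv[0]
    mod_width = mod_iv[1] - mod_iv[0]
    return (exp_mid - mod_mid) / mod_mid, exp_width / mod_width

# Placeholder parameters (illustrative only)
mu_model, sigma_model, measured = 1.00, 0.05, 0.95
for alpha in (0.0001, 0.25, 0.5, 0.75):
    exp_iv = alpha_cut_triangular(alpha, measured, 6.0 * sigma_model)
    mod_iv = alpha_cut_normal(alpha, mu_model, sigma_model)
    offset, width_ratio = interval_error_measures(exp_iv, mod_iv)
    print(f"alpha={alpha}: offset={offset:+.3f}, width ratio={width_ratio:.2f}")
```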

5.2 Shrink-fit failure case study

A case study on the failure of a shrink-fit assembly subjected to torsional loading was conducted (Booker et al. 2004). The case study is a typical engineering problem and is less complex than the suspension analysis, but it allowed experimental data collection to be carried out more extensively to describe the design and performance parameters. The failure mechanics were modelled using traditional design formulae and a relatively new micro-mechanical approach so that the modelling and computational errors could be compared with experimental measurements from a statistical sample. The comparison between modelling results was extended to explore the characterisation of error functions within a performance space.

5.2.1 Classification

Performance parameters (contact pressure and holding torque) were estimated from conventional and new models, where

  • Design parameters are the dimensional, material and frictional properties of the shaft and hub components. Variability in these parameters was derived from dedicated tests on statistical samples.

  • Transfer functions for the failure mechanism involve mechanics of material and contact mechanics theories.

Performance parameters were characterised by PDFs and their equivalent normal PDFs. This means that a precise probabilistic description was available, implying scale point 3 in the scale for performance parameters.
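The equivalent normal PDFs referred to above can be obtained by moment matching. A minimal sketch, assuming a sample of performance-parameter values is available (the torque values below are placeholders, not the case-study data), is:

```python
import numpy as np
from scipy.stats import norm

def equivalent_normal(sample):
    """Fit an equivalent normal PDF to a sample of performance-parameter
    values by matching the first two moments (mean and standard deviation)."""
    mu = np.mean(sample)
    sigma = np.std(sample, ddof=1)   # sample standard deviation
    return norm(loc=mu, scale=sigma)

# Illustrative use with a placeholder sample of holding-torque values (N m)
torque_sample = np.array([410.0, 395.0, 428.0, 402.0, 417.0])
eq_norm = equivalent_normal(torque_sample)
print(eq_norm.mean(), eq_norm.std())
```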

Evidence for the holding torque was obtained by experimentally testing a statistical sample of shrink-fit assemblies. Evidence was also precisely described by a PDF in this case study, corresponding to the statistical scale point, i.e. scale point 3 in the scale for evidence.

The performance space in this case study consisted of one correlation case between analytical and experimental results, but the correlation between the design formula and the micro-mechanical model was extended to a large number of fit dimensions. The performance space therefore consisted of correlation between analytical results and experiments for one nominal set of dimensions (scale point 1 in the scale for the performance space), but of correlation across a large number of variants (scale point 3) between the micro-mechanical models and the conventional design formula if this secondary evidence is considered.

5.2.2 Error functions

The error functions for this case study are more straightforward (category F). This category also represents the most complete state of knowledge regarding uncertainty in the correlation. The equivalent normal distributions of holding torque from the various models and the experiments are presented in Fig. 6. The error functions for the various models in this case study are computed and presented in Table 3. It is evident that all models consistently under-estimate the experimental holding torque, observed in EF1 being greater than zero. The design formula with surface roughness included in the formulation has the greatest systematic discrepancy from the experimental measurement (126%), whereas the original micro-mechanical approach yields the best correlation against experimental evidence (15%). Discrepancies in variance (EF2) are also observed; in particular, the micro-mechanical method with an empirical coefficient greatly over-estimates variability. Owing to the precise understanding of uncertainty in this category, the error functions accurately characterise both the systematic and random uncertainty arising in the modelling process, in a manner consistent with probability theory.
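A minimal sketch of how first- and second-moment error functions can be evaluated for this fully probabilistic category is given below. For illustration only, EF1 is assumed here to be the offset of the experimental mean relative to the model mean and EF2 the ratio of experimental to modelled standard deviation, which is consistent with the signs and magnitudes discussed above; the paper's exact formulations should be used in practice. The numerical inputs are placeholders.

```python
def error_functions(mu_exp, sigma_exp, mu_model, sigma_model):
    """Illustrative first- and second-moment error functions for the
    probabilistic case (assumed forms, not necessarily the paper's Eqs.):
    EF1: relative offset of the experimental mean from the model mean,
         positive when the model under-estimates the mean.
    EF2: ratio of experimental to modelled standard deviation,
         below unity when the model over-estimates variability."""
    ef1 = (mu_exp - mu_model) / mu_model
    ef2 = sigma_exp / sigma_model
    return ef1, ef2

# Placeholder holding-torque statistics (illustrative only, not case-study data)
ef1, ef2 = error_functions(mu_exp=420.0, sigma_exp=25.0,
                           mu_model=360.0, sigma_model=30.0)
print(f"EF1 = {ef1:.2f}, EF2 = {ef2:.2f}")
```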
Fig. 6

Equivalent normal distributions of holding torque from various models and experiments

Table 3  Error functions for shrink-fit case study

Models                                      EF1 (φ)    EF2 (ε)
Design formula with surface roughness       1.26       1.09
Design formula without surface roughness    0.48       0.80
Micro-mechanical original                   0.15       0.98
Micro-mechanical empirical                  0.21       0.53

6 Discussion

6.1 Application of the framework in design

Uncertainty characterisation using error functions is most suited to the third scale in the performance space classification in the framework (scale point 3), i.e. design with a large number of solutions in the performance space where products are varied from existing ones over many design cycles. This type of information is typically available in designs which are variations or adaptations of existing designs (Fajdiga et al. 1996). Since these design types constitute about 80% of engineering products (Pahl et al. 2007), many companies can benefit from better utilisation of the information and knowledge obtained through prototype tests or lessons learnt from past failures. The extensive experience and knowledge accumulated from a large number of design variants could allow very useful inference of the accuracy of modelling or simulation techniques, building up a reliable knowledge-based system. The error functions can be stored along with the data, model and load cases, and retrieved for reuse in a similar design case.

6.1.1 Similarity measures

The feasibility of an information system using a knowledge repository for engineering models has been investigated by other researchers (Mocko et al. 2004). The importance of "inaccuracy" in these developments was also recognised, and the work presented here is conceptually relevant to such systems. Error functions might be used to estimate the risks and uncertainties inherent in an application of analysis or simulation, and might allow identification of where resources should be expended to reduce error. Typically, the evaluation techniques available in the early product development process do not consider the effects of systematic and stochastic variability during production or end-use of the design (Kazmer and Roser 1999). Therefore, if error functions for the analytical relationships from a similar product are available, designers can estimate the uncertainty and variability associated with the analysis. Parametric error models may be built and applied in early estimation to correct for the discrepancy while allowing simpler models to be used. In the case of a continuous performance space, the error models are continuous functions of key design parameters and may be interpolated between known design points. In the case of a discrete performance space, the error models consist of sets of discrete performance and design parameters. In both cases, mechanisms for inferring similarity between variants can be established either theoretically or empirically (Pahl et al. 2007).
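As a sketch of the reuse mechanism described above, the following assumes a small repository of error-function entries keyed by a few design parameters and interpolates EF1 and EF2 for a new variant by inverse-distance weighting over normalised parameters. The repository contents, parameter names and values are hypothetical:

```python
import numpy as np

# Hypothetical repository: design parameters -> (EF1, EF2) from past correlations
repository = [
    ({"mass": 1500.0, "stiffness": 30.0}, (-0.07, 1.5)),
    ({"mass": 1700.0, "stiffness": 35.0}, (-0.09, 1.7)),
    ({"mass": 1400.0, "stiffness": 28.0}, (-0.06, 1.4)),
]

def interpolate_error(query, repo, keys=("mass", "stiffness")):
    """Inverse-distance-weighted interpolation of stored error functions
    over a continuous performance space of design variants."""
    pts = np.array([[entry[0][k] for k in keys] for entry in repo])
    efs = np.array([entry[1] for entry in repo])
    q = np.array([query[k] for k in keys])
    scale = pts.max(axis=0) - pts.min(axis=0)    # normalise parameter ranges
    d = np.linalg.norm((pts - q) / scale, axis=1)
    if np.any(d < 1e-12):                        # exact match in repository
        return tuple(efs[np.argmin(d)])
    w = 1.0 / d
    return tuple((w[:, None] * efs).sum(axis=0) / w.sum())

ef1, ef2 = interpolate_error({"mass": 1600.0, "stiffness": 32.0}, repository)
```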

6.1.2 Utility and decision metrics

The utility of error functions has to be interpreted in the context of the design problem. As the performance parameters are the basis for judging the adequacy of a design against the objectives and requirements, any errors in their estimation may move the design point away from the optimum. For a performance measure that is sensitive to spread and tail behaviour, e.g. reliability, errors in the estimation of the second moment (EF2) are dominant. In this situation, minimising susceptibility to this type of error may be of primary importance to designers owing to the potential risks, increased costs and rework that such errors may entail. On the other hand, for assessing design performance against a specified target, models with the lowest first-moment error (EF1) are generally preferred. Traditionally, a model that tends towards conservative estimates has been acceptable, but this design philosophy is not suitable for designing efficient modern systems. In particular, an over-conservative model is not preferred when variability in the design parameters can be better accounted for probabilistically.

In the shrink-fit design case study (refer to Table 3), the empirical micro-mechanical model, although providing a reasonable estimate of the mean, also results in serious over-estimation of the variability. If the model were used in designing shrink-fits, the designer would be likely to specify much tighter tolerances or resort to selective assembly for quality control purposes, thus increasing manufacturing and assembly costs unnecessarily. On the other hand, the design formula with the surface roughness model under-estimates the holding torque significantly, potentially resulting in uneconomic over-design. For instance, for a specified torque transmission requirement, shrink-fits may be designed with higher quality materials, larger nominal sizes etc., causing material wastage and additional weight. In this case study, the original micro-mechanical model results in the lowest errors and is preferred to the design formulae provided the increased modelling detail is not cost-prohibitive. In more typical design settings, designers may be required to make more difficult trade-offs and compromises. With respect to this, utility and decision metrics taking account of the cost and time of using the different models can also be developed. These metrics for model selection may be weighted against the modelling issues, the design context and the user preferences.
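One simple way to operationalise such a trade-off is a weighted score that combines the two error components with the normalised cost and time of using each model. The weights and the cost and time scores below are hypothetical and would be set by the design context and user preferences; the EF1 and EF2 values are taken from Table 3:

```python
import math

def model_score(ef1, ef2, cost, time, weights=(0.4, 0.3, 0.2, 0.1)):
    """Hypothetical decision metric for model selection: lower is better.
    Penalises systematic error |EF1|, spread error |ln EF2| (so EF2 = 1 is
    ideal and over/under-estimation of variability are treated symmetrically),
    and the normalised cost and time of performing the analysis."""
    w1, w2, w3, w4 = weights
    return w1 * abs(ef1) + w2 * abs(math.log(ef2)) + w3 * cost + w4 * time

# Candidates: (EF1, EF2) from Table 3 plus hypothetical cost/time scores (0-1)
candidates = {
    "design formula, with roughness":    (1.26, 1.09, 0.1, 0.1),
    "design formula, without roughness": (0.48, 0.80, 0.1, 0.1),
    "micro-mechanical, original":        (0.15, 0.98, 0.6, 0.7),
    "micro-mechanical, empirical":       (0.21, 0.53, 0.7, 0.7),
}
best = min(candidates, key=lambda m: model_score(*candidates[m]))
print("preferred model:", best)
```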

6.2 Limitations and challenges

There are some limitations and challenges to the characterisation of uncertainty using error functions. The knowledge base for the error functions may require several years and many product variants to establish its validity and reliability before companies can benefit from its incorporation in future design applications to assist decision-making. In this respect, construction of the knowledge base would be greatly assisted by a more systematic structuring of engineering data and by the application of techniques such as formal information management and decision-support systems. It is envisaged that the construction is also best done as a collaborative activity, for example by concentrating on a particular industry.

Without doubt, the introduction of any new design methodology will require a commitment of time and cost from industry. As such, the application of error functions may be constrained to cases where these issues are not restrictive. Owing to market pressure to reduce prototype testing in the product development cycle, resources dedicated to improving confidence in virtual simulations can be more easily justified (Shephard et al. 2004). Moreover, the culture inherent in a company is often difficult to change, although change may still take place slowly and with some resistance. It is acknowledged that any incorporation of new methods should be done with minimum modification to existing design tools and procedures to ensure acceptance in industry (Frost 1999). Furthermore, engineers often need to comply with legislation and regulations, including health and safety, ethical and environmental issues and considerations. Standards and best practice codes, for example, are expected to be adhered to for economic or safety reasons, owing to a lack of confidence in engineering analysis, as discussed for fracture mechanics by De Castro and Fernandes (2004). Therefore, the technique described in this paper may be of less relevance to safety-critical industries such as nuclear engineering and aerospace, which may need to abide by strict physical testing requirements. It is envisaged, however, that high-volume and customer-sensitive industries, e.g. the automotive sector, can potentially benefit from such an approach.

7 Concluding remarks

This paper has presented a knowledge management approach to aid uncertainty management in product performance evaluation using analytical and simulation tools. A framework for the systematic organisation of the understanding of uncertainty in product development and in simulation procedures has been presented and substantiated with case studies from different design domains involving varying degrees of uncertainty in the simulation data and varying quantities of evidence from test or service performance. Informed by these case studies, a classification is devised based on the organisation of knowledge regarding the disparity between analytical and experimental evidence. This classification is used to identify the most appropriate method for the representation of error, including conventional probabilistic design techniques, interval methods and methods based on Fuzzy set theory. The error functions have been developed by isolating systematic and stochastic uncertainty arising from simulation processes. The incorporation of error functions into the modelling of design processes is suggested to aid analysis strategy and to identify the progression of confidence to achieve reliable virtual product evaluations. The improvement of confidence in simulation-based design environments through management of knowledge gained from previous design activities and from in-service experience with products will aid decision-making in future design applications. It is intended that the framework may provide first steps towards developing conceptually more robust information and decision-support systems incorporating uncertainty theories and formalisms.

As design knowledge is accumulated and uncertainty evolves from imprecise to precise variables, design tools and techniques need to be adapted to handle the various types of uncertainty. It is acknowledged that probability-based methods are the most reliable for propagating uncertainty at the downstream end of the design process. The application of probabilistic methods has generally improved the understanding of variability in loading, geometry and material properties, but, coupled with an understanding of model uncertainty, these methods will provide a powerful tool for a more complete characterisation of uncertainty in engineering analyses. For instance, a precise understanding of the errors between models and experiments can only be achieved by probabilistic methods that take into consideration the variability inherent in the process and design parameters. Improvement in the understanding of uncertainty and of the limitations of design analysis can minimise errors and increase the opportunity for designing more competitive products. Further work involves the collection of a wider range of design exemplars to substantiate the framework and to further evaluate requirements for the error functions. Additionally, contributions towards education and training, minimisation of modelling and computational intensity and improvements to data management can facilitate industrial implementation of the method proposed. The authors contend that educational awareness of, and technological progression towards, more sophisticated methods for handling uncertainty in engineering design are improving.

Footnotes

  1. The support is the set of all real values of a Fuzzy set for which the membership is greater than zero.


Acknowledgments

This paper reports research conducted by Dr. Goh while a postgraduate student at the University of Bristol, supported by a Postgraduate Scholarship from that University and by the UK Overseas Research Students Award Scheme (ORSAS). This support is gratefully acknowledged, as is assistance from Dr. J. Devlukia and Mr. A. D’Cruz of Land Rover with one of the case studies reported here.

References

  1. Apeland S, Aven T, Nilsen T (2002) Quantifying uncertainty under a predictive, epistemic approach to risk analysis. Reliab Eng Syst Saf 75:93–102
  2. Ayyub BM, de Souza GFM (2000) Reliability-based methodology for life prediction of ship structures. In: Ship Structure Symposium, Washington
  3. Barton RR, Limayen F, Meckesheimer M, Yannou B (1999) Using metamodels for modelling propagation of design uncertainties. In: Proceedings of ICE'99, The Hague, Netherlands, pp 521–528
  4. Booker JD, Truman CE, Wittig S, Mohammed ZA (2004) A comparison of shrink-fit holding torque using probabilistic, micro-mechanical and experimental approaches. Proc IMechE Part B J Eng Manuf 218:175–187
  5. Box G, Draper N (1987) Empirical model-building and response surfaces. Wiley, New York
  6. Bucher C, Bourgund U (1990) A fast and efficient response surface approach for structural reliability problems. Struct Saf 7:57–66
  7. Bury KV (1999) Statistical distributions in engineering. Cambridge University Press, New York
  8. Cooke R (2004) The anatomy of the Squizzel: the role of operational definitions in representing uncertainty. Reliab Eng Syst Saf 85(1–3):313–319
  9. Cornell CA (1969) A probability-based structural code. J Am Concr Inst 66(12):974–985
  10. De Castro PMST, Fernandes AA (2004) Methodologies for failure analysis: a critical survey. Mater Des 25:117–123
  11. de Neufville R (2004) Uncertainty management for engineering systems planning and design. MIT Engineering Systems Division, pp 1–18
  12. Du X, Chen W (2000) Methodology for managing the effect of uncertainty in simulation-based design. AIAA J 38(8):1471–1478
  13. Dubois D, Foulloy L, Mauris G, Prade H (2004) Probability–possibility transformations, triangular Fuzzy sets, and probabilistic inequalities. Reliab Comput 10:273–297
  14. Fajdiga M, Jurejevcic T, Kernc J (1996) Reliability prediction in early phases of product design. J Eng Des 7(2):107–128
  15. Ferson S, Kreinovich V, Ginzburg L, Myers D, Sentz K (2002) Constructing probability boxes and Dempster-Shafer structures. Sandia National Laboratory, Albuquerque, New Mexico
  16. Ferson S, Joslyn CA, Helton JC, Oberkampf WL, Sentz K (2004) Summary from the epistemic uncertainty workshop: consensus amid diversity. Reliab Eng Syst Saf 85:355–369
  17. Frost RB (1999) Why does industry ignore design science? J Eng Des 10(4):301–304
  18. Giachetti RE, Young RE, Roggatz A, Eversheim W, Perrone G (1997) A methodology for the reduction of imprecision in the engineering process. Eur J Oper Res 100(2):277–292
  19. Goh YM, Booker JD, McMahon CA (2005a) A framework for the handling of uncertainty in engineering knowledge management to aid product development. In: Proceedings of ICED, Melbourne, Australia
  20. Goh YM, Booker JD, McMahon CA (2005b) Uncertainty modelling of a suspension unit. Proc IMechE Part D J Automob Eng 219(6):755–771
  21. Gong S (2006) Discussion of the design philosophy and modified non-expert fuzzy set model for better product design. J Eng Des 17(6):533–548
  22. Hans-Jurgen S (2002) Uncertainty in engineering design. Syst Sci 28(2):5–13
  23. Haugen EB (1980) Probabilistic mechanical design. Wiley, New York
  24. Honeywell T (2001) Drive to cut prototyping. Prof Eng 14(23):46
  25. Kazmer D, Roser C (1999) Evaluation of product and process design robustness. Res Eng Des 11:20–30
  26. Kleijnen JPC (1995) Verification and validation of simulation models. Eur J Oper Res 82:145–162
  27. Klir GJ, Smith RM (2001) On measuring uncertainty and uncertainty-based information: recent developments. Ann Math Artif Intell 32:5–33
  28. Langley RS (2000) Unified approach to probabilistic and possibilistic analysis of uncertain systems. J Eng Mech 126:1163–1172
  29. Laskey KB (1996) Model uncertainty: theory and practical implications. IEEE Trans Syst Man Cybern A Syst Hum 26(3):340–348
  30. Mavris D, deLaurentis D (2000) A probabilistic approach for examining aircraft concept feasibility and viability. Aircr Des 3:79–101
  31. McAdams DA, Wood KL (2002) A quantitative similarity metric for design-by-analogy. J Mech Des 124(2):173–182
  32. McCalley RB (1957) Nomogram for selection of safety factors. Design News
  33. McKay MD, Beckman RJ, Conover WJ (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21:239–245
  34. Mischke CR (1970) A method of relating factor of safety and reliability. J Eng Ind 537–542
  35. Mocko G, Malak R, Paredis C, Peak R (2004) A knowledge repository for behavioral models in engineering design. In: Proceedings of ASME DETC'04, Salt Lake City, Utah
  36. Modares M, Mullen R, Muhanna R, Zhang H (2004) Buckling analysis of structures with uncertain properties and loads using an interval finite element method. In: The NSF workshop on reliable engineering computing, Savannah, Georgia
  37. Moens D, Vandepitte D (2004) Non-probabilistic approaches for non-deterministic dynamic FE analysis of imprecisely defined structures. In: Proceedings of international conference on noise and vibration engineering, Leuven, Belgium
  38. Möller B, Graf W, Beer M (2000) Fuzzy structural analysis using α-level optimization. Comput Mech 26:547–565
  39. Moore RE (1966) Interval analysis. Prentice-Hall, New Jersey
  40. NAFEMS (2005) NAFEMS Technical Benchmark
  41. Nikolaidis E, Chen S, Cudney H, Haftka R, Rosca R (2004) Comparison of probability and possibility for design against catastrophic failure under uncertainty. Trans ASME 126:386–394
  42. Nilsen T, Aven T (2003) Models and model uncertainty in the context of risk analysis. Reliab Eng Syst Saf 79:309–317
  43. O'Hagan A (1998) Eliciting expert beliefs in substantial practical applications. Statistician 47(1):21–35
  44. Ohsuga S (1989) Toward intelligent CAD systems. Comput Aided Des 21(5):317–337
  45. Pahl G, Beitz W, Feldhusen J, Grote KH (2007) Engineering design: a systematic approach, 3rd edn. Wallace K, Blessing L (trans and eds), Springer, London
  46. Parson S, Hunter A (1998) A review of uncertainty handling formalisms. Applications of uncertainty formalisms, pp 8–37
  47. Pugsley AG (1966) The safety of structures. Edward Arnold (Publishers) Ltd., London
  48. Rao SS, Berke L (1997) Analysis of uncertain structural systems using interval analysis. AIAA J 35(4):727–735
  49. Rao SS, Cao L (2002) Optimum design of mechanical systems involving interval parameters. J Mech Des 124:465–472
  50. Raufaste E, da Silva Neves R, Mariné C (2003) Testing the descriptive validity of possibility theory in human judgments of uncertainty. Artif Intell 148(1–2):197–218
  51. Rekuc SJ, Aughenbaugh JM, Bruns M, Paredis CJJ (2006) Eliminating design alternatives based on imprecise information. In: SAE 2006 world congress and exhibition, Detroit, MI
  52. Riha D, Thacker B, Enright M, Huyse L, Fitch S (2002) Recent advances of the NESSUS probabilistic analysis software for engineering applications. In: 42nd structures, structural dynamics, and materials (SDM) conference, Denver, Colorado
  53. Sandri SA, Dubois D, Kalfsbeek HW (1995) Elicitation, assessment, and pooling of expert judgments using possibility theory. IEEE Trans Fuzzy Syst 3(3):313–335
  54. Sargent R (1998) Verification and validation of simulation models. In: 30th winter simulation conference, Washington
  55. Shephard M, Beall M, O'Bara R, Webster R (2004) Toward simulation-based design. Finite Elem Anal Des 40:1575–1598
  56. Shigley JE, Mischke CR (1989) Mechanical engineering design, 5th edn. McGraw-Hill, Singapore
  57. Stroud WJ, Krishnamurthy T, Smith SA (2001) Probabilistic and possibilistic analyses of the strength of a bonded joint. In: 42nd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Seattle, WA, AIAA 2001–1238
  58. Suh NP (2001) Axiomatic design advances and applications. Oxford University Press, New York
  59. Tzannetakis N, Donders S, van der Peer J, Weal P (2004) A system approach to simulation-based design under uncertainty, through best in class simulation process integration and design optimization. In: DETC'04, Salt Lake City, Utah
  60. Ugarte I, Sanchez P (2003) Using modified interval analysis in system verification. In: XVIII conference on design of circuits and integrated circuits DCIS'03, Ciudad Real, Spain
  61. Venegas LV, Labib AW (2005) Fuzzy approaches to evaluation in engineering design. Trans ASME 127:24–33
  62. Wood KL, Antonsson EK, Beck JL (1990) Representing imprecision in engineering design: comparing fuzzy and probability calculus. Res Eng Des 1:187–203
  63. Wu YT, Wirsching PH (1987) New algorithm for structural reliability estimation. J Eng Mech 113:1319–1336
  64. Zadeh LA (1975) The concept of a linguistic variable and its application to approximate reasoning. Inf Sci 8:199–249
  65. Zadeh LA (1978) Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst 1:3–28
  66. Zio E, Apostolakis GE (1996) Two methods for the structured assessment of model uncertainty by experts in performance assessments of radioactive waste repositories. Reliab Eng Syst Saf 54:225–241

Copyright information

© Springer-Verlag London Limited 2007

Authors and Affiliations

  1. Department of Mechanical Engineering, University of Bath, Bath, UK
  2. Department of Mechanical Engineering, University of Bristol, Bristol, UK
