Abstract

Info-Gap (IG) Decision Theory, introduced in Chap. 5, is used to formulate and solve the analysis of robustness in the early design of a mechanical latch.

The three components necessary to assess robustness of the latch design (a system model, performance requirement, and representation of uncertainty) are discussed.

The robustness analysis indicates that the nominal design can accommodate significant uncertainty before failing to deliver the required performance.

The discussion concludes with the assessment of a variant design to show how a decision (“which design should be chosen?”) can be confidently reached despite the presence of significant gaps in knowledge.
1 Introduction
The role of numerical simulation to aid in decision-making has grown significantly in the past three decades across diverse applications, such as financial modeling, weather forecasting, and design prototyping (Oden et al. 2006). Despite their widespread use, numerical simulations suffer from unavoidable sources of uncertainty, such as simplifying assumptions made to represent non-idealized behavior, unknown initial conditions, and variability in environmental conditions. The question posed herein is: How can simulations support a confident decision, given their inherent sources of uncertainty? This chapter gives an answer to this question using Info-Gap (IG) Decision Theory, the philosophy of which is presented in Chap. 5.
The main factors in selecting an approach to evaluate the effect of uncertainty on a decision are (1) the nature and severity of the uncertainty, and (2) computational feasibility. In cases of deep uncertainty, a probability distribution cannot be specified with confidence. Further, uncertainty can often be unbounded, and decision-makers need to understand how much uncertainty they are accepting when they proceed with a decision. A few examples of comparing methods can be found in Hall et al. (2012) and Ben-Haim et al. (2009). Hall et al. (2012) compare IG with RDM (Chap. 2). Ben-Haim et al. (2009) compare IG with robust Bayes (a min–max approach with probability bounds known as P-boxes, and coherent lower previsions). In all cases, the major distinction is the prior knowledge: how much information (theoretical understanding, physical observations, numerical models, expert judgment, etc.) is available, and what the analyst is willing to assume if this information does not sufficiently constrain the formulation of the decision-making problem. One can generally characterize IG as a methodology that demands less prior information than other methods in most situations. Furthermore, P-boxes can be viewed as a special case of IG analysis (Ferson and Tucker 2008), which suggests that these two approaches may be integrated to support probabilistic reasoning. The flexibility promoted by IG, however, also makes it possible to conveniently analyze a problem for which probability distributions would be unknown, or at least uncertain, and for which a realistic and meaningful worst case cannot be reliably identified (Ben-Haim 2006).
Applying IG to support confident decision-making using simulations hinges on the ability to establish the robustness of the forecast (or predicted) performance to modeling assumptions and sources of uncertainty. Robustness, in this context, means that the performance requirement is met even if some of the modeling assumptions happen to be incorrect. In practice, this is achieved by exercising the simulation model to explore uncertainty spaces that represent gaps in knowledge—that is, the “difference” between best-known assumptions and how reality could potentially deviate from them. An analysis of robustness then seeks to establish that performance remains acceptable within these uncertainty spaces. Some requirement-satisfying decisions will tolerate more deviation from best-known assumptions than others. Given two decisions offering similar attributes (feasibility, safety, cost, etc.), preference should always be given to the more robust one—that is, the solution that tolerates more uncertainty without endangering performance.
As discussed in Chap. 5, three components are needed to carry out an IG analysis: a system model, a performance requirement, and a representation of uncertainty. This chapter shows how to develop these components to assess robustness of the performance of a mechanical latch in an early phase of design prototyping. IG is particularly suitable for design prototyping because it accommodates minimal assumptions while communicating the results of an analysis efficiently through a robustness function.
Our discussion starts in Sect. 10.2 by addressing how the three components of an info-gap analysis (system model, performance requirement, and uncertainty model) can be formulated for applications that involve policy topics. The purpose is to emphasize that info-gap applies to a wide range of contexts, not just those from computational physics and engineering grounded in first-principle equations. Section 10.3 introduces our simple mechanical example—the design of a latch, and its desired performance requirements given that the geometry, material properties, and loading conditions are partially unknown. The section also discusses the system model defined to represent the design problem. Details of the simulation (how the geometry is simplified, how some of the modeling assumptions are justified, how truncation errors are controlled, etc.) are omitted, since they are not essential to understand how an analysis of robustness is carried out.
Section 10.4 discusses the main sources of uncertainty in the latch design problem, how they are represented with an info-gap model of uncertainty, and the implementation of the robustness analysis. Two competing latch designs are evaluated in Sect. 10.5 to illustrate how confident decisions can be reached despite the presence of significant gaps in knowledge. A summary of the main points is provided in Sect. 10.6.
2 Application of Info-Gap Robustness for Policymaking
Our application suggests how info-gap robustness (see Chap. 5) can be used to manage uncertainty in the early-stage design of a latch mechanism (Sects. 10.3 and 10.4) and how robustness functions may be exploited to support decision-making (Sect. 10.5). The reader should not be misled, however, into believing that this methodology applies only to contexts described by first-principle equations such as the conservation of momentum (also known as Newton’s second law, “Force = Mass × Acceleration”) solved for the latch example. The discussion presented here emphasizes the versatility of IG robustness for other contexts, particularly those involving policy topics for which well-accepted scientific models might be lacking.
Consider two high-consequence policymaking applications:

Climate change: Policy decisions to address the impact on human activity of changes in the global climate (and vice versa) tend to follow either the precautionary principle or the scientific design of intervention. In the first case (precautionary principle), decision-makers would err on the side of early intervention to mitigate the potentially adverse consequences of climate change, even if the scientific understanding of what causes these changes, and what the consequences might be, is lacking. In the second case (scientific design of intervention), longer-range planning is implied while a stronger emphasis would be placed on gaining a better scientific understanding and reducing the sources of uncertainty before policy is enacted. In the presence of incomplete understanding of the phenomena that drive changes in the global climate, effects on the planet’s ecosystem, and potential consequences for human activity, it is unclear which early intervention strategies to adopt and how effective they might be. Given scientific uncertainty, however, policymakers are inclined to adopt early precautionary intervention.

Long-range infrastructure planning: The world population is increasingly concentrated in urban areas. The ten largest urban areas, such as Tokyo (Japan), Jakarta (Indonesia), Delhi (India), and New York City (USA), feature population densities between 2,000 and 12,000 individuals per km^{2}, exceeding densities in rural areas by two to three orders of magnitude. Managing these population centers poses serious challenges in terms of housing, transportation, water and power supplies, access to nutrition, waste management, and many other critical systems. Future infrastructure needs to be planned for many decades in the presence of significant uncertainty regarding population and economic growth, urbanization laws, and the adoption of future technologies. The development, for example, of peer-to-peer transportation systems might make it necessary to rethink how conventional public transportation networks and taxi services are organized. The challenge of infrastructure planning is to design sufficient flexibility into these interconnected engineered systems when some of the factors influencing them, together with the performance requirements themselves, might be partially unknown.
The challenge of policymaking for these and similar problems is, of course, how to manage the uncertainty. These applications often involve incomplete scientific understanding of the processes involved, elements of extrapolation or forecasting beyond known or tested conditions, and aspects of the decision-making practice that are not amenable to being formulated with mathematical models. Info-gap robustness, nevertheless, makes it possible to assess whether a policy decision would deliver the desired outcomes even if the definition of the problem features significant uncertainty and some of the assumptions formulated in the analysis are incorrect.
Consider, for example, climate change. Developing a framework to support policymaking might start with a scientific description of how the oceans, atmosphere, and ice caps interact. Figure 10.1 illustrates a satellite measurement of sea-level variability (left) compared to the prediction obtained with a global circulation model (right). The latter is based on historical data and observations made in the recent past that are extrapolated to portray the conditions observed by the satellite over a similar period. Smith and Gent (2002) describe the physics-based models solved numerically to describe this phenomenology.
Even though simulations such as Fig. 10.1 are grounded in first-principle descriptions, they are not immune to uncertainty. Executing this calculation with a one-degree resolution (i.e., 360 grid points around the Earth), for example, implies that some of the computational zones are as large as 314 km^{2} near the equator, which is nearly five times the surface area of Washington, D.C. This raises the question of whether localized eddies that contribute to phenomena such as the Atlantic Ocean’s Gulf Stream are appropriately represented. Beyond the question of adequacy, settings such as resolution, the fidelity with which various processes are described, and the convergence of numerical solvers generate numerical uncertainty. They imply that code predictions could differ, perhaps significantly, from the “true-but-unknown” conditions that analysts seek to know.
Other commonly encountered sources of uncertainty in first-principle simulations include the variability or incomplete knowledge of initial conditions, boundary conditions, constitutive properties (material properties, reactive chemistry, etc.), and source functions (e.g., how much greenhouse gas is introduced into the atmosphere?). This is in addition to not always understanding how different processes might be coupled (e.g., how does the chemistry of the ocean change due to increased acidity of the atmosphere?). Model-form uncertainty, which refers to the fact that the functional form of a model might be unknown, is also pervasive in computational sciences. An example would be to select a mathematical equation to represent the behavior of a chemical at conditions that extrapolate beyond what can be experimentally tested in a laboratory. Finally, large-scale simulation endeavors often require passing information across different code platforms. Such linkages can introduce additional uncertainty, depending on how the variables solved for in one code are mapped to initialize the calculation in another code.
The aforementioned sources of uncertainty, while multifaceted in nature and potentially severe, are handled appropriately by a number of well-established methods, such as statistical sampling (Metropolis and Ulam 1949), probabilistic reliability (Wu 1994), worst-case analysis, and IG robustness. In the last case, the system model is the simulation flow that converts input settings to performance outcomes. The performance requirement defines a single criterion or multiple criteria that separate success from failure. The uncertainty model describes the sources of variability and lack-of-knowledge introduced by the simulation flow. Once the three components are defined, a solution procedure is implemented to estimate the robustness function of a given decision. Competing decisions can then be assessed by their ability to meet the performance requirement. Likewise, the confidence placed in a decision is indicated by the degree to which its forecasted performance is robust, or insensitive, to increasing levels of uncertainty in the formulation of the problem. Regardless of how sophisticated a simulation flow might be, IG analysis always follows this generic procedure, as is discussed in Sects. 10.3 and 10.4 for the latch application.
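The generic procedure can be summarized in a short sketch. Everything below is illustrative rather than taken from the chapter: the toy system model, nominal values, interval half-widths, and requirement threshold are hypothetical, and the worst case is found by enumerating hypercube vertices, which presumes a system model that is monotone in each variable.

```python
import itertools

def robustness(system_model, nominal, widths, requirement, h_values):
    """Estimate the info-gap robustness: the largest horizon of
    uncertainty h for which the worst-case performance over the
    hypercube U(h) still satisfies the requirement."""
    h_hat = 0.0
    for h in h_values:  # increasing horizons of uncertainty
        # Vertices of U(h); for a monotone system model the worst
        # case lies at a vertex of the hypercube.
        bounds = [(x0 - h * w, x0 + h * w) for x0, w in zip(nominal, widths)]
        worst = max(system_model(v) for v in itertools.product(*bounds))
        if worst > requirement:  # requirement first violated at this h
            break
        h_hat = h
    return h_hat

# Toy system model: "performance" degrades with deviation from nominal.
model = lambda x: abs(x[0] - 1.0) + 2.0 * abs(x[1] - 3.0)
h_hat = robustness(model, nominal=[1.0, 3.0], widths=[0.5, 0.5],
                   requirement=1.0, h_values=[0.1 * k for k in range(1, 21)])
```

For this toy model the worst case at horizon h is 1.5 h, so the requirement of 1.0 is first violated just above h = 2/3, and the grid search returns h_hat = 0.6.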
Eventually, information generated from first-principle models yields indicators that need to be combined with “soft” data to support policy decisions. Figure 10.2 (Bamber et al. 2009) is a notional illustration where sea levels (left) predicted over Western Europe would be combined with population densities (right) to assess how to mitigate the potentially adverse consequences of rising waters. In this example, one source of information (sea levels) comes from a physics-based global circulation model, while the other (projected population levels) represents “softer” data, since the future population levels need to be extrapolated from spatially and temporally sparse census data. As one moves away from science-based modeling and simulation, the data, opinions, and other considerations integrated to support policy decisions contribute sources of uncertainty of their own. This uncertainty might also get amplified from extrapolating to conditions that have not been previously observed, or when forecasts are projected into the future. Figure 10.3 suggests what happens, for example, to forecasts of the world’s population (United Nations 2014).
IG robustness makes it possible to assess the extent to which a policy decision is affected by what may be unknown, even in the presence of sources of uncertainty that do not lend themselves to parametric representations such as probability distributions, polynomial chaos expansions, or intervals. Accounting for an uncertainty such as the gray region of Fig. 10.3 is challenging if a functional form is lacking. One might not know if the world’s population can be modeled as increasing or decreasing, or even if the trend can be portrayed as monotonic.
Figure 10.4 suggests one possibility to handle this challenge, whereby increasing levels of uncertainty, as indicated by the uncertainty spaces U(α_{1}) (blue region) and U(α_{2}) (green region), are defined around a nominal trend (red-dashed line). The uncertainty can be explored by selecting population values within these sets and without necessarily having to formulate a parametric representation (e.g., “population growth is exponential”) if policymakers are not willing to postulate such an assumption. The figure illustrates two values chosen in set U(α_{1}) at year Y_{1}, and three values selected in the larger-uncertainty set U(α_{2}) at year Y_{2}. This procedure would typically be implemented to assess if the policy objective is met as future populations deviate from the nominal trend in unknown ways.
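The sampling procedure of Fig. 10.4 can be illustrated with a short sketch. The nominal trend and horizon values below are hypothetical placeholders; consistent with the text, no growth law or probability distribution is assumed for how the actual population deviates from the nominal trend.

```python
import numpy as np

def nominal_population(year):
    # Hypothetical nominal trend (billions); the trend in Fig. 10.4
    # is notional, so any reasonable baseline serves the illustration.
    return 7.3 + 0.08 * (year - 2015)

def info_gap_set(year, alpha):
    """Interval of population values admitted at horizon alpha around
    the nominal trend (the sets U(alpha) of Fig. 10.4)."""
    p0 = nominal_population(year)
    return (p0 - alpha, p0 + alpha)

def sample_in_set(year, alpha, n, rng):
    # Explore the set by picking candidate population values inside it;
    # uniform sampling is a convenience, not a probabilistic assumption.
    lo, hi = info_gap_set(year, alpha)
    return rng.uniform(lo, hi, size=n)

rng = np.random.default_rng(42)
candidates = sample_in_set(2050, alpha=1.0, n=3, rng=rng)
```

The nesting property of the sets, U(α_{1}) ⊂ U(α_{2}) for α_{1} < α_{2}, follows directly from the interval construction.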
Another type of uncertainty, often encountered in the formulation of policy problems and which lends itself naturally to info-gap analysis, is qualitative information or expert opinions that introduce vagueness or non-specificity. For example, one might state from Fig. 10.3 that “World population is growing”, without characterizing this trend with a mathematical equation. Policymakers might seek to explore if decisions they consider can accommodate this type of uncertainty while delivering the desired outcomes. The components of such an analysis would be similar to those previously discussed. A system model is needed to analyze the consequences of specific conditions, such as “the population is growing” or “the population is receding”, and a performance requirement is formulated to separate success from failure. The uncertainty model would, in this case, include alternative statements (e.g., “the population is growing” or “the population is growing faster”) to assess if the decision meets the performance requirement given such an uncertainty.
3 Formulation for the Design of a Mechanical Latch
To provide a simple example of applying IG, we illustrate its use in the design of a latch for the compartment door of a consumer electronics product conceptually illustrated in Fig. 10.5. The objective of the design is to ensure proper opening and closing of the door. The challenge is that the geometry of the door, material properties, and loading conditions are not precisely known, which is common in an early design phase. Establishing that the performance of a given design is robust to these gaps in knowledge, as discussed in Sect. 10.4, demonstrates that the requirement can be met in the presence of potentially deep uncertainty. The decisionmaker or customer can rely on this information to appreciate the relative merits of different design decisions.
The first step is to define the problem, its loading scenario, and the decision criterion. Expert judgment suggests that the analysis be focused on stresses generated in the latch when opening and closing the door. Figure 10.6 indicates the latch component considered for analysis. For simplicity, the rest of the compartment is ignored, the contact condition is idealized, and severe loading conditions, such as those produced when dropping the device on the ground, are not modeled. Likewise, inelastic deformation, plasticity, damage, and failure mechanics are not considered. A linear, isotropic, and homogeneous material model is specified. Such a model is useful to characterize the performance of the nominal design as long as one understands that the final material selected for manufacturing could exhibit characteristics that deviate significantly from this baseline. Assessing performance solely based on these nominal properties is, therefore, not a sound design strategy. It is necessary to assess the robustness of that performance to changes in the material (and other) properties.
The geometry of the latch shown on the right of Fig. 10.6 is simplified by converting the round corners to straight edges. This results in dimensions of 3.9 mm (length) by 4.0 mm (width) by 0.8 mm (thickness). The latch’s head, to which the contact displacement is applied, protrudes 0.4 mm above the surface. A perfectly rigid attachment to the compartment door is assumed, which makes it possible to neglect the door altogether and greatly simplifies the implementation.
A loading scenario also needs to be specified to carry out the analysis. It is assumed that the latch locks securely in place in the compartment’s receptacle to close the door after deflecting by a specified amount. Given the simplification of the geometry considered, this condition can be analyzed by applying a displacement whose nominal value is:

\( U_{\text{Contact}} = U_{\text{Contact}}^{(0)} = 0.50\,\text{mm}. \quad (10.1) \)
The nominal value, however, is only a best estimate obtained from the manufacturing of similar devices, and it is desirable to guarantee that the latch can meet its performance requirement given the application of contact displacements different from 0.50 mm (either smaller or greater).
Finally, a performance requirement must be defined for simulation-based decision-making. Multiple criteria are often considered in structural design, sometimes conflicting with each other. One wishes, for example, to reduce the weight of a component while increasing its stiffness. For clarity, a single requirement is proposed here. Applying the displacement U_{Contact} generates a bending deformation, which produces a force that varies in time as the latch relaxes from its initial deflection. The resulting dynamically applied load produces stresses in the material. A common performance criterion is to ensure that the material can withstand these stresses without breaking. The design is said to be requirement-compliant if the maximum stress anywhere in the latch does not exceed a critical stress value:

\( \sigma_{\text{Max}} \le \sigma_{\text{Critical}}. \quad (10.2) \)
The upper bound is defined as a fraction of the yield stress that indicates material failure:

\( \sigma_{\text{Critical}} = f_{S} \, \sigma_{\text{Yield}}, \quad (10.3) \)

where f_{S} denotes a safety factor defined in the range 0 ≤ f_{S} ≤ 1. The yield stress is σ_{Yield} = 55 MPa for the generic polycarbonate material analyzed.
With this formulation, the analyst can select a desired safety factor and ascertain how much uncertainty can be tolerated given this requirement. A well-known trade-off, which is observed in Sect. 10.4, is that demanding more performance by selecting a larger value of f_{S} renders the design more vulnerable (or less robust) to modeling uncertainty.
The analysis of mechanical latches is a mature field after several decades of their common use in many industries. An example is given in BASF (2001), where peak strain values obtained for latches of different geometries are estimated using a combination of closed-form formulae and empirical factors. While these simplifications enable derivations “by hand,” they also introduce assumptions that are thought to be inappropriate for this design problem. The decision is therefore made to rely on a finite element representation (Zienkiewicz and Taylor 2000) to estimate the stresses that result from imposing the displacement (10.1) and to assess whether the requirement-compliance condition (10.2) is met. The finite element model discretizes the latch’s geometry into elemental volumes within which the equations of motion are solved. The Newmark algorithm is implemented to integrate the equations in time (Newmark 1959). This procedure also introduces assumptions, such as the type of interpolation function selected for the elemental volumes. These choices, combined with the discretization of the geometry, generate truncation errors. Even though they are important, these mesh-size and run-time considerations are not discussed in order to keep our focus on the info-gap analysis of robustness.
To briefly motivate the modeling choices made, it suffices to state that the three-dimensional representation of the latch’s geometry yields a more realistic prediction of the deformation shape by capturing the curvature caused by the applied load. This is illustrated in Fig. 10.7, which depicts the computational mesh and deformation pattern resulting from applying the nominal displacement of 0.50 mm. This simulation is performed with stand-alone, MATLAB^{®}-based finite element software developed by the authors. The predicted deformation, i.e., displacements such as those depicted in Fig. 10.7, and the corresponding forces are extracted and provided to a one-dimensional approximation based on linear beam theory to estimate the peak stress, σ_{Max}. Predicting σ_{Max} depends, naturally, on choices made to set up the simulation, such as values of the applied displacement (“Is the displacement equal to 0.50 mm or something else?”) and material properties (“Are the stiffness and density properties prescribed using the nominal values or something else?”). Next, an analysis of robustness is carried out to assess the extent to which the design will remain requirement-compliant even if some of these assumptions are changed.
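The chapter's one-dimensional beam-theory post-processing step is not detailed in the text. As a hedged illustration of how a peak bending stress can be estimated from a prescribed tip deflection, the following uses a generic cantilever formula from linear Euler-Bernoulli theory; the dimensions are illustrative, and this simplified stand-alone estimate does not reproduce the chapter's coupled FE/beam result of σ_Max = 28.07 MPa.

```python
def cantilever_peak_stress(E, delta, t, L):
    """Peak bending stress in a rectangular cantilever of thickness t
    and length L whose free end is deflected by delta (linear
    Euler-Bernoulli beam theory):
      tip force    F = 3*E*I*delta / L**3
      peak moment  M = F*L
      peak stress  sigma = M*(t/2)/I = 3*E*delta*t / (2*L**2)."""
    return 3.0 * E * delta * t / (2.0 * L ** 2)

# Illustrative values only (consistent units: MPa and mm):
# E = 2000 MPa, delta = 0.5 mm, t = 0.8 mm, L = 3.9 mm.
sigma = cantilever_peak_stress(E=2000.0, delta=0.5, t=0.8, L=3.9)
```

Because the moment of inertia cancels out of the stress expression, only the thickness and length of the cross section enter the estimate, which is one reason such closed-form screening formulas are popular in early design.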
4 The Info-Gap Robust Design Methodology
This section discusses the methodology applied to achieve an info-gap robust design. Three issues need to be addressed before illustrating how the robustness function is calculated and utilized to support decision-making: first, defining the design space; second, determining the sources of uncertainty against which the design must be robust; and third, representing this uncertainty mathematically without imposing unwarranted assumptions. These choices are discussed before showing how the robustness function is derived.
Several parameters of the geometry are available to define the design space, including the length, width, thickness, and overall configuration of the latch’s geometry. For computational efficiency, it is desirable to explore an as-small-as-possible design space while ensuring that the parameters selected for design optimization exercise an as-significant-as-possible influence on performance, which here is the peak stress, σ_{Max}, of Eq. (10.2).
The first issue is to define the design space by judiciously selecting parameters that describe the geometry of the latch. This is achieved using global sensitivity analysis to identify the most influential parameters (Saltelli et al. 2000). Five sizing parameters are considered. They are the total length (L), width (W_{C}), and depth (D_{C}) of the latch; and the geometry (length, L_{M}, and depth, D_{H}) of the surface where the displacement U_{Contact} is applied. An analysis of variance is performed based on a three-level, full-factorial design of computer experiments that requires 3^{5} = 243 finite element simulations. The parameters L (total length) and W_{C} (width) are found to account for approximately 76% of the total variability of peak-stress predictions when the five dimensions (L; W_{C}; D_{C}; D_{H}; L_{M}) are varied between their lower and upper bounds. Design exploration is therefore restricted to the pair (L; W_{C}).
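The screening step above can be sketched with a three-level, full-factorial design and a simple main-effects variance decomposition. The surrogate stress function and the parameter levels below are hypothetical placeholders: the chapter's actual study evaluates the 243 runs with the finite element model over the bounds of its design space.

```python
import itertools

# Hypothetical surrogate for the peak stress (MPa) as a function of the
# five sizing parameters; stands in for the 243 finite element runs.
def peak_stress(L, Wc, Dc, Dh, Lm):
    return 28.0 * (3.9 / L) ** 2 * (4.0 / Wc) + 0.5 * Dc + 0.2 * Dh + 0.1 * Lm

# Three levels per factor (illustrative lower bound, nominal, upper bound).
levels = {
    "L":  [3.5, 3.9, 4.3], "Wc": [3.6, 4.0, 4.4], "Dc": [0.7, 0.8, 0.9],
    "Dh": [0.3, 0.4, 0.5], "Lm": [1.0, 1.2, 1.4],
}
names = list(levels)
runs = list(itertools.product(*levels.values()))   # 3**5 = 243 runs
responses = [peak_stress(*run) for run in runs]
mean = sum(responses) / len(responses)
total_var = sum((y - mean) ** 2 for y in responses) / len(responses)

# Main-effect share of each factor: variance of the per-level means
# (each level appears in 3**4 = 81 runs), normalized by the total variance.
shares = {}
for i, name in enumerate(names):
    level_means = [
        sum(y for run, y in zip(runs, responses) if run[i] == v) / 81
        for v in levels[name]
    ]
    shares[name] = (sum((m - mean) ** 2 for m in level_means) / 3) / total_var
```

With this surrogate, L and W_{C} dominate the variance shares, mirroring the text's conclusion that design exploration can be restricted to that pair.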
The second issue is to define the sources of modeling uncertainty against which the design must be robust. This uncertainty represents the fact that real-world conditions might deviate from what is assumed in the simulation model. To make matters more complicated, the magnitude of these deviations, which indicates by how much the model could be incorrect, is unknown. Furthermore, precise probability distributions are lacking. It is essential that the representation of uncertainty can account for these attributes of the problem without imposing unwarranted assumptions.
Table 10.1 defines the sources of modeling uncertainty considered in the analysis. The first four variables (E; G; ν; ρ) represent the variability of polycarbonate plastics. The nominal values (third column) and typical ranges (fourth column) originate from surveying material properties published by various manufacturers. We stress, however, that the actual values may fall outside of these ranges. The fifth variable, U_{Contact}, accounts for uncertainty of the actual displacement to which the latch might be subjected when opening and closing the compartment door. The dynamic load overshoot factor (sixth variable), F_{OS}, is purely numerical. It expresses that the actual loading may differ from how the displacement condition is specified in the simulation. Variable F_{OS} is used as a scaling factor that changes the dynamic overshoot resulting from the application of a short-duration transient load.
The modulus of elasticity (E) is an uncertain material property. It is estimated at 2.0 GPa, and it is confidently known that it will not be less than this value, though it could be greater by one GPa or more. The most that can be said about this variable is that it falls in a one-sided range of unknown size, which can be represented by an unbounded family of nested intervals:

\( E \in \left[ E^{(0)},\; E^{(0)} + h \, W_{E}^{(\text{Upper})} \right], \quad h \ge 0, \quad (10.4) \)

where E^{(0)} = 2.0 GPa and \( W_{E}^{(\text{Upper})} \) = 1.0 GPa. In this formulation, the quantity h represents the unknown horizon-of-uncertainty (h ≥ 0). Likewise, the nominal value of the shear modulus (G) is 0.73 GPa, an estimate that could err as low as 0.71 GPa (or less), and as high as 1.11 GPa (or more). Thus, a family of nested asymmetric intervals captures the uncertainty in G:

\( G \in \left[ G^{(0)} - h \, W_{G}^{(\text{Lower})},\; G^{(0)} + h \, W_{G}^{(\text{Upper})} \right], \quad h \ge 0, \quad (10.5) \)

where G^{(0)} = 0.73 GPa, \( W_{G}^{(\text{Lower})} \) = 0.02 GPa, and \( W_{G}^{(\text{Upper})} \) = 0.38 GPa. Uncertainty in ν and ρ is represented by uncertain intervals similar to Eq. (10.5).
The fifth variable of Table 10.1 is the displacement U_{Contact}. The latch must allow reliable opening and closing of the compartment door for a nominal 0.50-mm displacement. This value, however, is only an estimate, and the range defined in the table (0.20 mm ≤ U_{Contact} ≤ 0.80 mm) expresses that the applied displacement is unknown. There is no fundamental reason that U_{Contact} cannot be less than 0.20 mm or cannot exceed 0.80 mm. These are estimates based on extreme events, which are typically poorly known because they tend to be postulated rather than observed. A formulation with nested intervals acknowledges that this range is uncertain:

\( U_{\text{Contact}} \in \left[ U_{\text{Contact}}^{(0)} - h \, W_{U}^{(\text{Lower})},\; U_{\text{Contact}}^{(0)} + h \, W_{U}^{(\text{Upper})} \right], \quad h \ge 0, \quad (10.6) \)

where \( U_{\text{Contact}}^{(0)} \) = 0.50 mm and \( W_{U}^{(\text{Lower})} \) = \( W_{U}^{(\text{Upper})} \) = 0.30 mm.
The above description of intervals for the variables (E; G; ν; ρ; U_{Contact}; F_{OS}) addresses the third and final question, which is how to mathematically represent the model uncertainty. Little is typically known about sources of uncertainty such as these in the early stage of a design. For this reason, and to avoid injecting unsubstantiated assumptions into the analysis, no probability law or membership function is assumed. Even the ranges listed in Table 10.1 are questionable, as collecting more information or choosing a different material for manufacturing could yield values outside of these assumed intervals. For these reasons, the uncertainty of each variable is represented as a range of unknown size, which defines a family of six-dimensional hypercubes:

\( U(h) = \left\{ {\varvec{\uptheta}} : \; \theta_{k}^{(0)} - h \, W_{k}^{(\text{Lower})} \le \theta_{k} \le \theta_{k}^{(0)} + h \, W_{k}^{(\text{Upper})}, \; 1 \le k \le 6 \right\}, \quad h \ge 0. \quad (10.7) \)

The vector \( {\varvec{\uptheta}} = \left( {\theta_{k} } \right)_{1 \le k \le 6} \) collects the six variables (E; G; ν; ρ; U_{Contact}; F_{OS}), and \( \theta_{k}^{(0)} \) denotes a nominal value (third column of Table 10.1). The IG model of uncertainty, U(h), is not a single set (a hypercube in this case), but rather an unbounded family of nested sets (hypercubes). The hypercubes grow as h gets larger, endowing h with its meaning as a horizon-of-uncertainty. The scaling coefficients \( W_{k}^{(\text{Lower})} \) and \( W_{k}^{(\text{Upper})} \) are set such that the assumed ranges (fourth column of Table 10.1) are recovered when h = 1. It is emphasized that this definition is arbitrary. What is essential in this formalism is that the horizon-of-uncertainty, h, is unknown, which expresses our ignorance of the extent to which modeling assumptions might deviate from reality. The definition of Eq. (10.7) makes it explicit that there is no worst case, since the horizon-of-uncertainty can increase indefinitely.
Table 10.2 summarizes the components of the info-gap analysis for the latch problem, as defined in Chap. 5. For a given horizon-of-uncertainty, h, numerical values of the six variables are selected from the IG model of uncertainty, U(h), of Eq. (10.7). These variables define a single realization of the system model analyzed to evaluate the performance of the latch, which is defined herein as a peak stress. This is repeated with newly selected values from U(h) until the uncertainty model at this horizon-of-uncertainty has been thoroughly explored and the maximal (worst) stress, \( \mathop{\max}\limits_{\theta \in U(h)} \sigma_{\text{Max}} \left( {\varvec{\uptheta}} \right) \), has been found. The maximal stress can be compared to the compliance requirement of Eq. (10.2). Equation (10.8) shows how searching for the maximal stress within the uncertainty model, U(h), relates to the robustness of the design.
Note that the IG uncertainty model (10.7) does not introduce any correlation between variables, because such information is usually unknown at an early design stage. A correlation structure that is only partially known can easily be included. An example of info-gapping the unknown correlation of a financial security model is given in Ben-Haim (2010).
At this point of the problem formulation, a two-dimensional design space p = (L; W_{C}) is defined together with the performance requirement (10.2). Modeling uncertainty is identified in Table 10.1 and represented mathematically in Eq. (10.7). The finite element simulation indicates that the peak stress experienced by the nominal design is σ_{Max} = 28.07 MPa, which does not exceed the yield stress of 55 MPa and provides a safety factor of f_{S} = 49%. Even accounting for the truncation error introduced by the lack of resolution in the discretization (see the discussion of Fig. 10.9), the conclusion is that the nominal design is requirement-compliant.
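The quoted safety factor is simply the margin of the predicted peak stress relative to yield; a quick check of the arithmetic:

```python
sigma_max = 28.07     # predicted peak stress of the nominal design (MPa)
sigma_yield = 55.0    # yield stress (MPa)

# Safety factor: fraction of the yield stress left as margin
f_s = 1.0 - sigma_max / sigma_yield
print(f"f_S = {100 * f_s:.0f}%")  # -> f_S = 49%
```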
The question we wish to answer is whether the design remains requirement-compliant if real-world conditions to which the latch might be subjected deviate from those assumed in the simulation. More explicitly, we ask: What is the greatest horizon-of-uncertainty, \( \hat{h} \), up to which the predicted peak stress, σ_{Max}, does not violate the requirement (10.2) for all realizations of the uncertain variables in the info-gap model (10.7)? The question is stated mathematically as:

\( \hat{h} = \max \left\{ {h \ge 0:\mathop {\hbox{max} }\limits_{{\left\{ {{\varvec{\uptheta}} \in U\left( h \right)} \right\}}} \sigma_{\text{Max}} \left( {\varvec{\uptheta}} \right) \le \sigma_{\text{Critical}} } \right\} \)  (10.8)

where \( \hat{h} \) is the robustness of the design given a performance requirement, σ_{Critical}.
Answering this question amounts to assessing how performance, such as the peak stress σ_{Max} here, evolves as increasingly more uncertainty is explored using the simulation model (Ben-Haim 2006). Section 10.5 shows how robustness functions of competing designs can be utilized to support decision-making. Figure 10.8 conceptually illustrates how robustness of the design is evaluated, as we now explain.
The robustness function, which is progressively constructed in Fig. 10.8 by exploring larger uncertainty spaces, U(h), maps the worst-case performance as a function of horizon-of-uncertainty. Its shape indicates the extent to which performance deteriorates as increasingly more uncertainty is considered. A robust design is one that tolerates as much uncertainty as possible without entering the “failure” domain (red region) in which the requirement is no longer met.
Applying the concept of robustness to the latch design problem is simple. One searches for the maximal (worst) stress, \( \mathop {\hbox{max} }\limits_{{\left\{ {{\varvec{\uptheta}} \in U\left( h \right)} \right\}}} \sigma_{\text{Max}} \left( {\varvec{\uptheta}} \right) \), obtained from finite element simulations in which the variables θ vary within the uncertainty space U(h) defined in Eq. (10.7). As stated in Eq. (10.8), robustness is the greatest size of the uncertainty space such that the design is requirement-compliant irrespective of which model is analyzed within U(\( \hat{h} \)). Said differently, all system models belonging to the uncertainty space \( U(\hat{h}) \) are guaranteed to meet the performance requirement of Eq. (10.2). The horizon-of-uncertainty, h, is nevertheless unknown and may exceed \( \hat{h} \). Not all system models in uncertainty sets U(h), for h greater than \( \hat{h} \), are compliant.
The procedure, therefore, searches for the worst-case peak stress within the uncertainty space U(h). This is a global optimization problem (Martins and Lambe 2013) whose resolution provides one datum of the robustness function, such as the point (σ_{1}; h_{1}) in Fig. 10.8b that results from exploring the uncertainty space U(h_{1}). For simplicity, the uncertainty spaces illustrated on the left side of the figure are represented with two variables, θ = (θ_{1}; θ_{2}). This should not obscure the fact that most applications deal with larger-size spaces (the latch has six variables θ_{k}). Figure 10.8c indicates that the procedure is repeated by increasing the horizon-of-uncertainty from h_{1} to h_{2}, hence performing an optimization search over a larger space U(h_{2}).
The procedure outlined above stops when requirement-compliance is no longer guaranteed, that is, when the worst-case peak stress exceeds the critical stress σ_{Critical}. The corresponding point \( (\sigma_{\text{Critical}} ; \hat{h} \)) is necessarily located on the edge of the (red) failure region. By definition of robustness (10.8), \( \hat{h} \) is the maximum level of uncertainty that can be tolerated while guaranteeing that the performance requirement is always met.
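This stopping rule can be implemented as a bisection on the horizon-of-uncertainty. In the sketch below, the surrogate stress function is an illustrative stand-in for the finite element model and sampling stands in for a proper global optimizer; only the structure of the search reflects Eq. (10.8):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_nominal = np.ones(6)   # placeholder nominal values
w = 0.5 * np.ones(6)         # placeholder scaling weights

def peak_stress(theta):
    """Illustrative stand-in for the FE prediction of sigma_Max (MPa)."""
    return 28.07 + 60.0 * np.abs(theta - theta_nominal).max()

def worst_case_stress(h, n_samples=2000):
    """Approximate max of sigma_Max over the hypercube U(h) by sampling."""
    thetas = rng.uniform(theta_nominal - h * w, theta_nominal + h * w,
                         size=(n_samples, 6))
    return max(peak_stress(t) for t in thetas)

def robustness(sigma_critical, h_max=2.0, tol=1e-3):
    """Greatest h whose worst case still satisfies the requirement,
    i.e. the robustness h_hat of Eq. (10.8), found by bisection."""
    if worst_case_stress(h_max) <= sigma_critical:
        return h_max  # requirement never violated up to h_max
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_stress(mid) <= sigma_critical:
            lo = mid  # still compliant: robustness is at least mid
        else:
            hi = mid  # requirement violated: robustness is below mid
    return lo
```

With the actual latch model, repeating this loop for a range of critical stresses produces the robustness curve of Fig. 10.9; each inner `worst_case_stress` evaluation is where the bulk of the computational expense lies.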
Figure 10.9 depicts the robustness function of the nominal latch design. It establishes the mapping between critical stress (σ_{Critical}, horizontal axis) and robustness (\( \hat{h} \), vertical axis). The horizontal gray lines indicate the truncation error that originates from discretizing the continuous equations on a finite-size mesh (Hemez and Kamm 2008). They are upper bounds of truncation error that can be formally derived (Mollineaux et al. 2013) and numerically evaluated through mesh refinement to support sensitivity and calibration studies (van Buren et al. 2013). For the nominal geometry (L = 3.9 mm and W_{C} = 4.0 mm), running the simulation with a 200-μm mesh on a dual-core processor of a laptop computer takes four minutes and produces a truncation error of 4.76 MPa. A run with a 100-μm mesh reduces this upper bound of error to 1.63 MPa at the cost of a 54-min solution time. The need to perform several hundred runs to estimate the robustness function motivates the choice of the 200-μm mesh resolution. The resulting level of truncation error (4.76 MPa) is acceptable for decision-making, since it represents a numerical uncertainty of only ≈10% relative to the critical stress (σ_{Critical} = 55 MPa).
The robustness function is obtained by continuing the process suggested in Fig. 10.8, adding more and more points by considering greater and greater levels of horizon-of-uncertainty. For any performance limit σ_{Critical} on the horizontal axis, the corresponding point on the vertical axis is the greatest tolerable uncertainty, namely, the robustness \( \hat{h}\left( {\sigma_{\text{Critical}} } \right) \). The positive slope of the robustness function indicates a trade-off between performance requirement and robustness, as we now explain. Suppose that the analyst is willing to allow the peak stress to reach 55 MPa, which provides no safety margin (f_{S} = 0). From Fig. 10.9, it can be observed that the greatest tolerable horizon-of-uncertainty is \( \hat{h} \) = 0.40. (Note that this value accounts for truncation error, which effectively “shifts” the robustness function by 4.76 MPa to the right.) This means that the design satisfies the performance requirement (10.2) as long as none of the model variables θ = (E; G; ν; ρ; U_{Contact}; F_{OS}) deviates from its nominal value by more than 40%. Said differently, the design is guaranteed to satisfy the critical-stress criterion as long as real-world conditions do not deviate from the nominal settings of the simulation by more than 40%, even accounting for truncation effects.
Suppose, however, that the analyst wishes to be more cautious, for instance by requiring that the peak stress not exceed 45 MPa. The safety factor is now f_{S} = 18%. From Fig. 10.9, this peak-stress limit is satisfied provided that the model variables do not deviate from their nominal values by more than approximately 10%. In other words, the more demanding requirement (f_{S} = 18%) is less robust to uncertainty (\( \hat{h} \) = 0.10) than the less demanding requirement (f_{S} = 0 and \( \hat{h} \) = 0.40). More generally, the positive slope of the robustness function expresses the trade-off between greater caution in the mechanical performance (increasing f_{S}) and greater robustness against uncertainty in the modeling assumptions (increasing \( \hat{h} \)).
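Reading the trade-off off the curve amounts to interpolating \( \hat{h} \) at the required stress. The curve samples below are hypothetical, chosen only to reproduce the two readings quoted above:

```python
import numpy as np

# Hypothetical samples (sigma_Critical in MPa, h_hat) of the robustness
# curve of Fig. 10.9, anchored at the two readings discussed in the text.
sigma_crit = np.array([34.0, 45.0, 55.0, 65.0])
h_hat = np.array([0.00, 0.10, 0.40, 0.75])

def tolerable_uncertainty(sigma_req):
    """Greatest tolerable horizon-of-uncertainty at a required stress."""
    return float(np.interp(sigma_req, sigma_crit, h_hat))

def safety_factor(sigma_req, sigma_yield=55.0):
    """Safety margin implied by a required peak-stress limit."""
    return 1.0 - sigma_req / sigma_yield
```

A stricter requirement (smaller sigma_req, hence larger f_{S}) always maps to a smaller \( \hat{h} \): caution in performance is traded against robustness to uncertainty, which is exactly the positive slope of the robustness function.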
The choices offered to the decision-maker are clear. If it can be shown that real-world conditions cannot possibly deviate from those assumed in the simulation model by more than 40%, then the nominal design is guaranteed to be requirement-compliant. Otherwise, an alternate design offering greater robustness should be pursued. This second path is addressed next.
5 Assessment of Two Competing Designs
This section illustrates how robustness functions, such as the one discussed in Sect. 10.4, can be exploited to select between two competing designs. Figure 10.10 depicts a comparison between geometries that differ in their choices of design variables, p = (L; W_{C}). The left side is the nominal geometry (L = 3.9 mm, W_{C} = 4.0 mm), and the right side shows a 20% larger design (L = 4.68 mm, W_{C} = 4.80 mm). Given that the thickness is kept the same in both geometries, the volume of the variant design increases by ≈44%. This consideration is important because selecting the variant design would imply higher manufacturing costs. The decision-maker, therefore, would want to establish that the performance of the variant geometry is significantly more robust to the modeling uncertainty than what the nominal design achieves.
Figure 10.11 compares the robustness functions of the nominal and 20% larger geometries. The solid blue line identifies the nominal design, and the variant is shown with a dashed green line. Horizontal gray lines quantify the upper bounds of the truncation error that originates from mesh discretization. The results are meaningful precisely because the prediction uncertainty due to truncation effects is sufficiently small with the mesh discretization used.
Figure 10.11 illustrates that, when no modeling uncertainty is considered, the variant design clearly delivers better predicted performance. This is observed on the horizontal axis (at \( \hat{h} \) = 0), where the peak stress of the variant geometry (σ_{Max} = 16 MPa) is less than half the value for the nominal design (σ_{Max} = 34 MPa). This result is consistent with the fact that, in the variant design, the applied force is spread over a larger surface area, which reduces the stresses generated in the latch.
Suppose that the analyst requires a safety factor of f_{S} = 18%, implying that the stress must be no greater than 45 MPa. As observed in Fig. 10.9 (reproduced in the blue curve of Fig. 10.11), the nominal geometry tolerates up to 10% change in any or all of the model variables without violating the performance requirement. The larger-size geometry, however, can tolerate up to 100% change without violating the same requirement. In other words, the variant design is more robust (\( \hat{h} \) = 1.0 instead of \( \hat{h} \) = 0.10 nominally) at this level of stress (σ_{Critical} = 45 MPa).
The slopes of the two robustness functions can also be compared in Fig. 10.11. The slope represents the trade-off between robustness and performance requirement. A steep slope implies a low cost of robustness: robustness can be increased substantially by a relatively small relaxation of the required performance. The figure suggests that the cost of robustness for the nominal design (blue curve) is higher than for the variant geometry (green curve). Selecting the 20% larger design is undoubtedly the better decision, given that it delivers better predicted performance (a lower predicted value of σ_{Max}) and is less vulnerable to potentially incorrect modeling assumptions. In fact, the variant design offers an 18% safety margin (f_{S} = 18%) even if the model variables deviate from their nominal values by up to 100% (\( \hat{h} \) = 1.0). The only drawback of the variant design is the ≈44% larger volume, which increases manufacturing costs relative to those of the nominal design.
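The selection logic can be sketched as: at the required critical stress, prefer the design whose robustness curve yields the larger \( \hat{h} \). The curves below are hypothetical piecewise-linear stand-ins anchored to the values quoted for the two geometries, not the actual data of Fig. 10.11:

```python
import numpy as np

# Hypothetical robustness curves (sigma_Critical in MPa -> h_hat),
# anchored to the readings quoted in the text for the two designs.
nominal = (np.array([34.0, 45.0, 55.0]), np.array([0.00, 0.10, 0.40]))
variant = (np.array([16.0, 45.0, 60.0]), np.array([0.00, 1.00, 1.40]))

def h_hat(curve, sigma_req):
    """Interpolate the robustness of a design at a required stress."""
    sigma, h = curve
    return float(np.interp(sigma_req, sigma, h))

def prefer(curve_a, curve_b, sigma_req):
    """Pick the design that tolerates more uncertainty at sigma_req."""
    return "a" if h_hat(curve_a, sigma_req) >= h_hat(curve_b, sigma_req) else "b"
```

At sigma_req = 45 MPa this gives \( \hat{h} \) = 1.0 for the variant against 0.10 for the nominal design. Because the green curve dominates the blue one at every requirement here, the preference does not depend on sigma_req; in general, two robustness curves may cross, and the decision would then hinge on the performance level actually required.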
6 Concluding Remarks
This chapter has presented an application of simulation-based IG robust design. The need for robustness stems from recognizing that an effective design should guarantee performance even if real-world conditions deviate from modeling and analysis assumptions. Info-gap robustness is versatile, easy to implement, and does not require assuming information that is not available.
IG robust design is applied to the analysis of a mechanical latch for a consumer electronics product to provide a simple, mechanical illustration. The performance criterion is the peak stress at the base of the latch resulting from displacements that are applied to open or close the compartment. The geometry, simulation model, and loading scenario are simplified for clarity. Rounded corners that mitigate stress concentrations, for example, are replaced with straight edges. Likewise, the severe impact loads experienced when dropping the device on a hard surface are not considered. The description of the analysis, however, is comprehensive and can easily be translated to other, more realistic applications.
The robustness of the nominal design is studied to assess the extent to which performance is immune to the sources of uncertainty in the problem. This uncertainty expresses the fact that real-world conditions could differ from what is assumed in the simulation, without postulating either probability distributions or knowledge of worst cases. One example of how real-world conditions can vary from modeling assumptions is the variability of material properties. Uncertainty also originates from assumptions embodied in the simulation model that could be incorrect. One example is the dynamic overshoot factor used to mitigate our ignorance of how materials behave when subjected to fast-transient loads. The analysis of the mechanical latch pursued in this chapter indicates that the design can tolerate up to 40% uncertainty without exceeding the peak-stress performance requirement.
The performance of an alternate design, which proposes to spread the contact force over a larger surface area, is assessed for its ability to provide more robustness than the nominal design. The analysis indicates that the variant latch is predicted to perform better, while its robustness to modeling uncertainty is greater at all performance requirements. The variant geometry features, however, a 44% larger volume, which would imply higher manufacturing costs. The discussion presented in this chapter illustrates how an analysis of robustness helps the decision-maker answer the question of whether an improvement in performance, or the ability to withstand more uncertainty about real-world conditions, warrants the cost associated with a design change.
The simplicity of the example discussed here should not obscure the fact that searching for a robust design might come at significant computational expense if the simulation is expensive or the uncertainty space is high-dimensional. This, nevertheless, is what automation is for and what software is good at. Developing the technology to perform large-scale explorations frees the analyst to apply his or her creativity to the more challenging aspects of the design.
References
Bamber, J. L., Riva, R. E. M., Vermeersen, B. L. A., & Le Brocq, A. M. (2009). Reassessment of the potential sea-level rise from a collapse of the West Antarctic Ice Sheet. Science, 324(5929), 901–903.
Ben-Haim, Y., Dasco, C. C., Carrasco, J., & Rajan, N. (2009). Heterogeneous uncertainties in cholesterol management. International Journal of Approximate Reasoning, 50, 1046–1065.
Ben-Haim, Y. (2006). Info-gap decision theory: Decisions under severe uncertainty (2nd ed.). Oxford: Academic Press.
Ben-Haim, Y. (2010). Info-gap economics: An operational introduction (pp. 87–95). Palgrave Macmillan.
Ferson, S., & Tucker, W. T. (2008). Probability boxes as info-gap models. In Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS 2008), article number 4531314.
Hall, J. W., Lempert, R. J., Keller, K., Hackbarth, A., Mijere, C., & McInerney, D. J. (2012). Robust climate policies under uncertainty: A comparison of robust decision making and info-gap methods. Risk Analysis, 32(10), 1657–1672.
Hemez, F. M., & Kamm, J. R. (2008). A brief overview of the state-of-the-practice and current challenges of solution verification. In Computational methods in transport: Verification and validation (pp. 229–250). Springer.
Malone, R. C., Smith, R. D., Maltrud, M. E., & Hecht, M. W. (2003). Eddy-resolving ocean modeling. Los Alamos Science, 28, 223–231.
Martins, J. R. R. A., & Lambe, A. B. (2013). Multidisciplinary design optimization: A survey of architectures. AIAA Journal, 51, 2049–2075.
Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 44, 335–341.
Mollineaux, M. G., Van Buren, K. L., Hemez, F. M., & Atamturktur, S. (2013). Simulating the dynamics of wind turbine blades: Part I, model development and verification. Wind Energy, 16, 694–710.
Newmark, N. M. (1959). A method of computation for structural dynamics. ASCE Journal of Engineering Mechanics, 85, 67–94.
Oden, J. T., Belytschko, T., Fish, J., Hughes, T. J. R., Johnson, C., Keyes, D., Laub, A., Petzold, L., Srolovitz, D., & Yip, S. (2006). Revolutionizing engineering science through simulation. National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering.
Saltelli, A., Chan, K., & Scott, M. (2000). Sensitivity analysis. Wiley.
Smith, R., & Gent, P. (2002). Reference manual for the parallel ocean program (POP), ocean component of the Community Climate System Model. National Center for Atmospheric Research, Boulder, CO. (Also Technical Report LA-UR-02-2484, Los Alamos National Laboratory, Los Alamos, NM.)
United Nations. (2014). United Nations Department of Economic and Social Affairs (DESA) continent population 1950 to 2100. Wikimedia Commons, The Free Media Repository. Retrieved October 27, 2017, from commons.wikimedia.org/w/index.php?title=File:UN_DESA_continent_population_1950_to_2100.svg&oldid=130209070.
van Buren, K. L., Mollineaux, M. G., Hemez, F. M., & Atamturktur, S. (2013). Simulating the dynamics of wind turbine blades: Part II, model validation and uncertainty quantification. Wind Energy, 16, 741–758.
Wu, Y.-T. (1994). Computational method for efficient structural reliability and reliability sensitivity analysis. AIAA Journal, 32, 1717–1723.
Zienkiewicz, O. C., & Taylor, R. L. (2000). The finite element method, volume 1: The basis. Butterworth-Heinemann.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2019 The Author(s)
Cite this chapter
Hemez, F.M., Van Buren, K.L. (2019). InfoGap (IG): Robust Design of a Mechanical Latch. In: Marchau, V., Walker, W., Bloemen, P., Popper, S. (eds) Decision Making under Deep Uncertainty. Springer, Cham. https://doi.org/10.1007/9783030052522_10
Publisher Name: Springer, Cham
Print ISBN: 9783030052515
Online ISBN: 9783030052522