Classification and Moral Evaluation of Uncertainties in Engineering Modeling
- Cite this article as: Murphy, C., Gardoni, P., & Harris, C.E. Sci Eng Ethics (2011) 17: 553. doi:10.1007/s11948-010-9242-2
Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.
The Summit of the Future of Civil Engineering in 2006 developed The Vision for Civil Engineering in 2025, in which the role of civil engineers is described as serving as “master managers of risk and uncertainty caused by natural events, accidents, and other threats; and leaders in discussions and decisions shaping public environmental and infrastructure policy” (ASCE 2010). Elaborating on this vision at a Summit on Guiding Principles for Critical Infrastructures in December 2008, Lewis E. Link stressed that “we are living in a very risky world” and that risk “needs to become an integral part” of our decision- and policy-making in regard to infrastructure management (Reid 2009).
For engineers, dealing with risk and uncertainty is an important part of their professional work. Uncertainties are involved in understanding the natural world, such as knowing whether a particular event will occur, and in knowing the performance of engineering works, such as the behavior and response of a structure or infrastructure, the variability in material properties (e.g., characteristics of soil, steel, or concrete), geometry, and external boundary conditions (e.g., loads or physical constraints). Such uncertainties produce risks. In the standard account, risk is characterized by a set of possible consequences and their associated probabilities of occurrence (Kaplan and Garrick 1981), where the probabilities quantify the likelihood of occurrence of the potential consequences in light of the underlying uncertainties.
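The standard account can be made concrete with a small numerical sketch. The scenarios, probabilities, and consequence values below are invented for illustration and are not drawn from the paper:

```python
# Illustrative sketch of the standard account of risk: a set of possible
# consequences paired with their probabilities of occurrence.
# All numbers are hypothetical.

scenarios = [
    # (description, probability of occurrence, consequence in arbitrary cost units)
    ("minor cracking", 0.10, 1_000),
    ("member failure", 0.01, 50_000),
    ("full collapse", 0.001, 2_000_000),
]

# One common summary measure is the expected (probability-weighted) consequence.
expected_consequence = sum(p * c for _, p, c in scenarios)
print(expected_consequence)  # 0.1*1000 + 0.01*50000 + 0.001*2000000 = 2600.0
```

A fuller treatment would keep the whole set of probability–consequence pairs rather than collapsing it to a single expected value, since the distribution of consequences matters for judging acceptability.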
One important use of models in engineering risk analysis is to quantify the likelihood or probability of the occurrence of specific events or a set of consequences. Such models are often referred to as probabilistic models to highlight their specific function to account for and quantify uncertainties (Gardoni et al. 2002). This paper focuses on the classification and moral evaluation of the various sources of the uncertainties that underlie the modeling process in engineering. While an extensive literature exists on the evaluation, including the moral evaluation, of risk, little attention has been paid to the evaluation of the various kinds of uncertainties and, consequent to that evaluation, the determination of the proper response by engineers to them.
This paper has four sections. The first provides a brief description of the role of modeling in engineering. The second identifies three stages in engineering modeling and the types of uncertainty associated with each stage. The third and fourth sections discuss the different goals of science and engineering, a contrast that is especially important for this discussion, and consider how this general contrast plays out with regard to uncertainty and the use of models. The fourth section also articulates criteria for evaluating the uncertainties faced by engineers, drawing on our account of the distinctive function of engineering.
Modeling in Engineering
Models play a central role in engineering. Engineers Armen Der Kiureghian and Ove Ditlevsen (2009) write: “Engineering problems, including reliability, risk and decision problems, without exception, are solved within the confines of a model.” Many models in engineering are mathematical models. Such models represent an abstract and idealized version of the mathematical properties of the target (Weinert 1999). For example, a model might be a mathematical expression that relates one or more quantities of interest (e.g., a probability of failure, the structural response of a building or bridge subject to a natural hazard, the structural capacity of a building or bridge) to a set of input variables (e.g., material properties and geometry of a building or bridge). More specifically, engineers have developed models to predict the deformation of a building or bridge subject to an earthquake ground motion, for example. Such models might capture the characteristics of the structure (geometry and material properties), expressed through mathematical equations, that relate the deformations of the structure or structural members to the applied forces that might be generated by an earthquake. Other models might, for example, be used to estimate the probability of occurrence of future events such as earthquakes based on past records (such models are often empirical equations calibrated by statistical analysis) and/or on geological considerations.
Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. Models are often used by engineers to predict the performance of products. Computer simulations are often used to derive results from mathematical equations or to solve equations, especially when models are dynamic (evolving over time) or when optimization problems must be solved. For example, Monte Carlo simulations might be used to estimate the probability of occurrence of a specified event by repeating the same simulation a number of times using randomly generated values for the input random variables.
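As a concrete illustration of the Monte Carlo approach just described, the following minimal sketch estimates the probability that a hypothetical demand on a structural member exceeds its capacity; the normal distributions and their parameters are assumptions made purely for illustration:

```python
import random

# Minimal Monte Carlo sketch: estimate the probability that demand exceeds
# capacity by drawing both from assumed (hypothetical) distributions.

random.seed(42)  # fixed seed so the estimate is reproducible

N = 100_000
failures = 0
for _ in range(N):
    capacity = random.gauss(mu=100.0, sigma=10.0)  # e.g., member strength
    demand = random.gauss(mu=70.0, sigma=15.0)     # e.g., earthquake load effect
    if demand > capacity:
        failures += 1

p_failure = failures / N
print(p_failure)  # roughly 0.048 for these assumed inputs
```

With these assumed distributions the exact answer is available in closed form (demand minus capacity is normal), so the Monte Carlo estimate can be checked; in realistic problems with many correlated inputs and nonlinear models, simulation is often the only practical route.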
Uncertainty in Engineering Modeling
It is generally recognized that there are uncertainties inherent in the modeling process. Gardoni et al. (2002) and Hansson (2006) have identified and distinguished different kinds of uncertainty. However, there exists no systematic account of the specific kinds of uncertainty that arise in the various stages of modeling. We argue in this section that there are three main stages in which uncertainty must be treated in modeling: (1) development of a model, (2) application or implementation of a model, and (3) analysis of the results of a model. It is important to recognize that these three stages capture conceptually distinct phases of modeling. However, they are an abstraction, and in practice there might be some ambiguity in defining the boundaries between the stages. There is also a potentially iterative interaction among the stages (e.g., the application might influence the development, and the results might suggest a more refined and careful model). Below we identify and discuss the specific kinds of uncertainty associated with each stage of modeling, drawing on the categories of uncertainty found in Gardoni et al. (2002) and Hansson (2006).
Uncertainty in Developing a Model
Model Inexactness This type of uncertainty arises when approximations are introduced in the formulation of a model. There are two essential problems that may arise: error in the form of the model (e.g., a linear expression is used when the actual relation is nonlinear), and missing variables (i.e., the model contains only a subset of the variables that influence the quantity of interest). For example, a model might be developed to predict the corrosion rate of steel. In developing the model, a linear dependence might be assumed on the chloride concentration, which is known to affect corrosion. A linear dependence means that if the chloride concentration doubles, the corrosion rate also doubles. However, the actual relation between chloride concentration and the corrosion rate might not be linear (e.g., the effects of increasing or decreasing the chloride concentration on corrosion might taper off beyond a certain threshold value). In this case there would be inexactness in the model due to an inaccurate model form. Furthermore, other factors might be influencing the corrosion process, such as the moisture content, that are not included in the model. In this case, inexactness of the model would be due to a missing variable in the model.
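The corrosion example can be sketched numerically. The saturating “true” relation below is entirely hypothetical; the point is only that a mis-specified linear form, even when calibrated to noise-free data, extrapolates poorly once the true relation tapers off:

```python
# Sketch of model inexactness due to an inaccurate model form. The "true"
# relation is hypothetical: the corrosion rate saturates at high chloride
# concentration, but we fit a linear model anyway.

def true_rate(chloride):
    # Hypothetical saturating relation (rate tapers off at high concentration).
    return 10.0 * chloride / (1.0 + chloride)

# Noise-free calibration data over a range of chloride concentrations.
xs = [0.5 * i for i in range(1, 21)]   # 0.5 .. 10.0
ys = [true_rate(x) for x in xs]

# Ordinary least-squares fit of the (mis-specified) linear model y = a + b*x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# The linear model extrapolates badly: the true rate approaches 10,
# while the fitted straight line keeps growing.
print(a + b * 20.0, true_rate(20.0))
```

Missing variables would compound the problem: if moisture content also drove the true rate, no calibration of this one-variable model could remove the resulting error.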
Mistaken Assumptions Models are based on a set of assumptions. Uncertainties might be associated with the validity of such assumptions (e.g., problems arise when a model assumes normality or homoskedasticity when these assumptions are violated).
Measurement Error The parameters in a model are typically calibrated using a sample of the measured quantities of interest and the basic variables considered in the model. These observed values, however, could be inexact due to errors in the measurement devices or procedures, which then leads to errors in the calibration process. For example, if a model is constructed to predict the corrosion rate as a function of the chloride concentration, there could be errors in the measurement of the corrosion rates or the chloride concentrations used to calibrate the model. If the model is calibrated with inaccurate data, the resulting model will be incorrect because it is improperly calibrated.
Statistical Uncertainty Statistical uncertainty arises from the sparseness of the data used to calibrate a model. In particular, the accuracy of one’s inferences depends on the observation sample size. The smaller the sample size, the larger the uncertainty in the estimated values of the parameters. To illustrate, the model to predict corrosion as a function of chloride concentration might have been calibrated using one hundred samples. However, the confidence in the model would likely increase if it were calibrated using one thousand samples. Statistical uncertainty captures our degree of confidence in a model in light of the data used to calibrate it.
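This dependence on sample size can be illustrated with a small simulation. The data-generating distribution, and the choice of the mean rate as the calibrated parameter, are assumptions made purely for illustration:

```python
import random
import statistics

# Sketch of statistical uncertainty: the spread of a calibrated parameter
# estimate shrinks as the calibration sample grows. Here the "parameter" is
# simply the mean corrosion rate; the generating distribution is invented.

random.seed(0)

def estimate_mean_rate(sample_size):
    # Draw one calibration sample and return the estimated mean rate.
    sample = [random.gauss(mu=5.0, sigma=1.0) for _ in range(sample_size)]
    return statistics.mean(sample)

# Repeat the calibration many times at two sample sizes and compare spreads.
small = [estimate_mean_rate(100) for _ in range(200)]
large = [estimate_mean_rate(1000) for _ in range(200)]

spread_small = statistics.stdev(small)
spread_large = statistics.stdev(large)
print(spread_small, spread_large)  # the larger sample gives the tighter estimate
```

For this simple estimator the spread falls roughly as one over the square root of the sample size, which is why tenfold more data yields only about a threefold tightening.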
Uncertainty in Applying or Implementing a Model
Randomness in the Basic Variables This type of uncertainty reflects the variability or randomness in the basic variables used as inputs in the developed model. For example, the properties of steel or concrete that should be used as inputs in a model to predict the structural response (e.g., deformation) of a bridge subject to an earthquake might be uncertain, insofar as the actual specific characteristics of each material are influenced by many factors that we might not be able to capture (e.g., quality of the workmanship, or curing process).
Volitional Uncertainty This type of uncertainty is associated with the choices or decisions made in light of the developed probabilistic models and their outcomes. For example, given a model that demonstrates a high corrosion rate of regular steel in the given environment, an engineer might choose to use epoxy-coated, galvanized or stainless steel reinforcement bars in a bridge. Such bars have a greater initial cost than regular steel. However, in highly corrosive environments the use of corrosion-resistant reinforcement bars might significantly lower the expenses associated with maintenance, repair and replacement over the service life of the structure. At the same time, cracks in the epoxy-coating, even when small, can lead to a more aggressive localized corrosion problem. Alternatively, engineers might choose to use glass-fiber reinforced polymer (GFRP) reinforcing bars. However, while GFRP bars are not susceptible to traditional corrosion, research has shown that other types of deterioration mechanisms might occur. In the end, there is uncertainty about which specific choice an engineer will make, because various engineers may evaluate the benefits and limitations of various options differently and their ultimate choices might be influenced by several factors including, for example, their experience.
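The reinforcement-choice trade-off described above can be sketched as a toy life-cycle cost comparison. All costs and the service life are invented; a realistic comparison would discount future costs and model deterioration and repair explicitly:

```python
# Hypothetical life-cycle cost comparison behind the reinforcement choice
# described above. All numbers are invented for illustration.

def life_cycle_cost(initial_cost, annual_maintenance, service_life_years):
    # Undiscounted sketch: initial cost plus maintenance over the service life.
    return initial_cost + annual_maintenance * service_life_years

regular_steel = life_cycle_cost(initial_cost=100_000, annual_maintenance=8_000,
                                service_life_years=50)
epoxy_coated = life_cycle_cost(initial_cost=150_000, annual_maintenance=2_000,
                               service_life_years=50)

# In this invented highly corrosive scenario the higher initial cost pays off.
print(regular_steel, epoxy_coated)  # 500000 vs 250000
```

Even with such a calculation in hand, the volitional uncertainty remains: different engineers may weight localized-corrosion risk, GFRP deterioration, or their own experience differently and so reach different choices.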
Human Error This type of uncertainty is associated with errors in the application of a model by human beings, e.g., using a model in a non-applicable area or range, incorrectly applying a model in an applicable area or range (e.g., not satisfying some underlying assumptions like normality or homoskedasticity), illegitimately manipulating a model or data, or forcing a model to have a required outcome. For example, if a model on the corrosion rate was developed for steel traditionally used in civil engineering applications, the same model might not be applicable to another type of steel traditionally used in home appliances, like stainless steel.
Uncertainty in the Results of a Model
Endoxastic Uncertainty This type of uncertainty is inherent in the results of a model and affects our confidence in them. It is generated by all the uncertainties described earlier when discussing the uncertainties in the development and application of a model.
Metadoxastic Uncertainty This uncertainty concerns the degree of confidence we should have in a model itself or in the choice between alternative models. As Hansson (2006) notes, in practice attention is typically paid only to the most credible risk assessment, and “other possible assessments with lower but non-negligible credibility will have no influence on the calculations.” For example, consider the collapse of the Tacoma Narrows Bridge in 1940. The bridge was designed in accordance with state-of-the-art knowledge. However, it failed during its first year of service due to a torsional mechanical resonance associated with a steady mild wind of 67 km/h (42 mph). Such a mode of failure had never been experienced before and came about because of the more slender design of the bridge deck. The failure of the Tacoma Narrows Bridge brought to the attention of civil and structural engineers the necessity of accounting for aerodynamic and resonance effects in their analyses. Because this phenomenon was not accounted for before, the actual probability of failure was underestimated in the bridge design.
Aleatory Uncertainties vs. Epistemic Uncertainties
In the context of modeling, the different uncertainties described above can each be characterized as either aleatory uncertainties (from the Latin alea, a game of dice) or epistemic uncertainties (from the Greek ἐπιστήμη (episteme), knowledge) (Hacking 1975; Gardoni et al. 2002; Der Kiureghian and Ditlevsen 2009). The difference between the two types is that aleatory uncertainties are irreducible, whereas epistemic uncertainties are reducible, e.g., by the use of improved models, the acquisition of more accurate measurements, and the collection of larger samples. Aleatory uncertainties arise from the inherently uncertain or random character of nature. They are thus not influenced by the observer or the manner of observation, and as such they cannot be reduced by improving the observation method or protocol. By contrast, epistemic uncertainties arise from a lack of knowledge, a deliberate choice to simplify matters, errors in measuring observations, or the finite size and number of observation samples.
The division between aleatory and epistemic uncertainties plays an important role in engineering modeling and is often drawn in a context-dependent manner. Depending on the knowledge engineers take into consideration in a particular case and on other pragmatic considerations, a given uncertainty may be described as either aleatory or epistemic. To illustrate, consider tossing a coin. From the perspective of an engineer, this event might be considered random, with a 50% chance that a certain outcome will be obtained. However, from the perspective of physicists the outcome of the toss may not be considered random, but influenced by, for example, the exact angle at which the coin is struck by an individual’s finger and the force with which it is struck. Including such knowledge from physics in the engineering solution to this problem will change the uncertainty surrounding this event from aleatory to epistemic. As another illustration, if the design of a safe structure does not depend on state-of-the-art scientific knowledge (because, for example, simpler theories work sufficiently well, or more advanced theories require values for input variables that are not available), then there is a reason to draw the line between aleatory and epistemic uncertainties differently than science does. In designing a structure, classical (Newtonian) mechanics is typically sufficient to capture the underlying physics of a problem, without recourse to the more recent theory of relativity. Therefore, the factors not captured by classical mechanics might be considered aleatory uncertainties in this engineering application. On the other hand, if acquiring robust and comprehensive scientific knowledge is needed to design a safe structure, then there is a reason to categorize uncertainties in the way that science does. Likewise, what might be considered aleatory in daily engineering practice might not be considered aleatory when developing a new prototype or a one-of-a-kind structure.
One important question concerns the basis on which engineers should draw the line between aleatory and epistemic uncertainties in any given case. Furthermore, there is a question of the basis on which we should decide whether efforts should be undertaken to reduce epistemic uncertainties. The answers to these questions, we argue later in the paper, depend on understanding the goal of engineering in a society and how a given uncertainty impacts the ability of engineers to achieve this goal. Our discussion of the goal of engineering begins first with an analysis of science and the goal of science, which will provide a helpful contrast for understanding the goal of engineering.
The Discipline of Science
In order to get a better grasp on the unique features of engineering, it will be useful to look first at the sciences. Both the natural and social sciences aim at understanding in some sense: the natural sciences at an understanding of natural events and the social sciences at an understanding of human behavior and communities. It will be more useful, however, to focus on the natural sciences, because they are more relevant to engineering. Whether the natural sciences aim at truth is disputed in the philosophy of science, as the debate between realism and anti-realism illustrates. Furthermore, the debate does not show signs of resolution (Parsons 2006). However, the working scientist often takes a more pragmatic attitude toward the issue, frequently settling for whatever works in his or her research endeavors. For example, nobody understands how the act of measuring quanta could cause indefinite properties to become definite. Despite this measurement problem, quantum mechanics tells scientists what they want to know and lets them do what they want to do.
The classic account of scientific explanation is Carl Hempel’s deductive-nomological (“covering law”) model, in which an explanation is a deductive argument from one or more natural laws together with statements of initial conditions. Parsons (2006) gives a simple illustration:
Natural law: When water freezes, it expands with great force.
Initial conditions: The water in the pipes froze solid overnight.
Conclusion: The pipes burst. (Parsons 2006, p. 93).
In this model, explanation is the ability to predict, and prediction is made in terms of laws and initial conditions. There is no formal, objective difference between explanation and prediction: all adequate explanations are predictions and all predictions are explanations. Furthermore, explanations take the form of arguments.
If we adopt Hempel’s account, the question arises: how are the natural laws arrived at in the first place? The problem is that the relation between a scientific claim (including a scientific law) and its supporting evidence is underdetermined: the evidence does not support only a single claim. So a method is required for deciding when the evidence sufficiently supports a claim. Again, Hempel has proposed the classic account of how this takes place in his hypothetico-deductive (HD) model. According to the HD model, the scientific method begins with a hypothesis, continues with the deduction of a testable claim from this hypothesis, and concludes with that claim being confirmed or refuted by experiment. Although subject to important substantive criticisms (Hempel 1966), the HD model is sufficient for the purpose of suggesting that there are recognized steps and procedures for testing scientific claims.
Even if we accept Hempel’s account, however, prediction is not the only criterion for an acceptable scientific explanation. An ideal scientific explanation must be fruitful in suggesting other explanations, as the theory of natural selection has proven to be. The broader the scope of a theory, i.e. the more natural phenomena it can cover, the more desirable the theory or type of explanation is. If an explanation is consistent with other theories and types of explanation, it is more acceptable. Finally, the criterion of “simplicity” has often been cited, although the term is difficult to define (Baker 2010).
Modeling in Science
Models play a central role in scientific practice (Morgan and Morrison 1999), providing instruments for “experimentation, measurement, and theory development” (Gould 2003). Models provide a representation of a particular object. To be a representation of its object, a model must share similar properties with its object of representation (Hesse 1963). At the same time, models are not replicas, and so there must be differences, or negative analogies, between a model and its target. Differences between a model and its object may stem either from idealizations, such as “deliberate simplifications” that exclude from consideration properties not relevant to the purpose of the investigation, or from deliberate distortions of features found in the phenomena, such as assumptions of the omniscience of agents in economic markets and infinite velocities in physics. Finally, it may be unclear whether specific dimensions of a model match up with the object of representation; such dimensions are neutral analogies (Hesse 1963). Through investigating models, scientists can “discover features of and ascertain facts about the system the model stands for” (Frigg and Hartmann 2006). Learning through models involves constructing a representation relation between the model and target, investigating dimensions of the model to demonstrate certain theoretical claims about it, and converting these claims into claims about the target system.
Treatment of Uncertainty in Science
Models play a role in the scientific enterprise by contributing to our understanding of a new fact or enhancing the accuracy of scientific predictions. In science, as in engineering, we can draw a distinction between aleatory and epistemic uncertainties. However, in science aleatory uncertainties refer to those uncertainties inherent in nature. Epistemic uncertainties are not inherent in nature, but rather stem from our lack of knowledge or certainty about, for example, the accurateness of our model. Given that the goal of the scientific enterprise is understanding, any epistemic uncertainty should be reduced. The presence of epistemic uncertainties calls into question the claim to have acquired a new fact, or to have increased the accuracy of scientific predictions.
The Discipline of Engineering
The aim of engineering is not understanding nature for its own sake, but the creation of artifacts and technology. Unlike in science, then, where use of the term “knowledge” is controversial, reference to “engineering knowledge” is appropriate, because the knowledge involved is “knowing how” rather than “knowing that” (Ryle 1949). Inventions such as the pulley, lever, and wheel are examples of engineering artifacts. As for technology, the definition favored by many contemporary scholars is that technology is best understood as a “‘system’ composed of physical objects and tools, knowledge, inventors, operators, repair people, managers, government regulators, and others” (Dusek 2006). The pyramids of Egypt and the roads and aqueducts built by the Romans are examples of technology understood as a system, involving most of the components listed above. In the words of engineer and physicist Theodore von Kármán, “Scientists study the world as it is; engineers create the world that has never been.” Making the same point, Fung and Tong (2001) write, “Engineering is quite different from science. Scientists try to understand nature. Engineers try to make things that do not exist in nature. Engineers stress invention.”
Engineers draw on a variety of sources to devise possible solutions, including science, and they are often inspired by nature. Indeed, a central feature of modern engineering is the application of mathematics and modern (post-Galilean) science (especially physics) to make useful objects. One could say that engineering is applied science, and this definition has considerable merit, but it obscures the fact that engineering has unique features that differentiate it from science. First, engineering is goal-specific and aims at the fulfillment of a purpose that often has a local or specific character: constructing a particular bridge, creating a particular chemical process, developing a new headlamp for an automobile, developing a new composite, and so forth.
Second, unlike scientific theories and explanations, past technologies are not “refuted” and may exist alongside newer ones if they continue to satisfy some human need. A technologically unsophisticated plow may be more useful in developing countries because it may be easier to use, more easily repaired by the local population, and produced more cheaply. Even though sailing ships have been replaced in commerce, they are still useful for teaching the skills of navigation and for recreation. Some technologies have, however, been abandoned or fallen into disuse in favor of others. The history of the progression from sailing ships to steamships, to diesel-powered ships, to atomic-powered ships is an interesting example.
Third, engineering resolves the underdetermination problem with quite different criteria than the ones appropriate in science. If scientific theories are underdetermined by the facts, engineering problems are underdetermined in that there can usually be more than one solution to an engineering problem. This is because there is no unique criterion by which to decide what counts as the best solution to a given problem, which makes it difficult to rank alternative solutions, and because the solution space is often not well-defined, which makes it difficult to account for all possible solutions when selecting the one to pursue. To use terminology from mathematics, engineering problems are ill-posed: there is more than one solution, in part because not all inputs are known. Similarly, engineering problems are also labeled ill-structured (van de Poel 2001). Criteria for good engineering design, or external constraints, help limit the number of possible engineering solutions. These include availability of resources; the cost-effectiveness of the design; ease of manufacture, construction, and repair; safety; environmental impact and sustainability; elegance and aesthetic appeal; and others.
Treatment of Uncertainty in Engineering
We noted earlier that given that the central goal of the scientific enterprise is to understand new facts about the nature of the world or enhance the accuracy of scientific predictions, there is always reason to reduce epistemic uncertainty in scientific models. The very presence of epistemic uncertainties calls into question the claim to have succeeded in furthering the goal of science. However, the same treatment of uncertainty, including epistemic uncertainty, is not required in engineering. In contrast with science, the central goal of engineering is not understanding for its own sake, but rather the creation, modification, and invention of useful artifacts and technology that both satisfy the societal needs to which they are designed to respond and respect central societal constraints. Given this goal, the question then becomes: how should uncertainty be treated in engineering modeling? Below we develop a set of guidelines for the treatment of uncertainty in engineering modeling, which are designed to promote the central goal of engineering.
In our view, the starting point for any set of guidelines must be the recognition of the fact that uncertainty is inherent in the engineering enterprise. Innovation and invention are central to the drive in engineering to create useful objects for communities. By their nature, innovation and invention are uncertain, carrying unforeseen consequences and risks. In our view, if engineers were required to avoid uncertainties at all cost, we would undermine the capacity of engineers to be innovative and inventive, and, ultimately, we would severely limit the ability of engineers to fulfill their role in society. Accepting some degree of uncertainty is necessary to realize the aspiration for innovation and invention.
However, recognizing the need to accept at least some uncertainties does not give engineers complete freedom in creating new technologies regardless of the associated uncertainties and risk. That is, the inventive and innovative character of engineering does not entail that all uncertainty must be accepted. We believe engineers must innovate in a responsible manner. Below we spell out what constitutes the appropriate way for engineers to deal with uncertainties.
The first guideline is that engineers acknowledge the uncertainty inherent in their work. While this guideline might seem obvious, this is not a trivial point in practice. Engineers far too often only consider point estimates of the model inputs or outcomes (e.g., using the mean or expected values of the variables and/or models or prima facie conservative guesses) that either ignore or do not explicitly account for the uncertainties in the inputs or outcomes. Point estimates based on “worst case scenarios” implicitly account for some uncertainties. For example, in estimating the capacity of a structural member, engineers often make approximations that are likely to lead to a capacity that is smaller than the actual capacity, therefore underestimating the actual force or deformation a member could take before failure. However, by not explicitly accounting for the uncertainties, it might be difficult to assess the actual risk, which requires knowing the actual capacity and the likely departure from it. So, engineers should explicitly account for the underlying uncertainties in their work. Acknowledging uncertainty is a precondition for making principled and well-educated decisions about how to treat uncertainties and about the acceptability of risks.
The second guideline is that engineers evaluate the necessity or importance of innovation and invention in any particular case. If there is no societal need for or value in, for example, a new technology or a newly designed artifact (e.g., in terms of reduced costs, higher reliability, or greater safety), then putting forward a new technology or design might not be justified. A new technology or design will carry new uncertainties. In this case, using more familiar technologies and normal designs might be preferable.
For those cases where innovation is judged important or necessary, engineers must then evaluate the associated new uncertainties as acceptable or not. The third guideline is that engineers must determine whether such uncertainties are aleatory or epistemic. The division between aleatory and epistemic uncertainty reflects the distinction between reducible and non-reducible uncertainties. As we noted earlier, judgment is involved in drawing the line between epistemic and aleatory uncertainties. The external constraints informing engineering problems should influence where the boundary between aleatory and epistemic uncertainties is drawn. In engineering we may often find it useful to classify something as aleatory, if we find that we do not need to delve deeper (e.g., by using non-Newtonian physics) for engineering purposes. That is, some factors that might be considered as epistemic in a scientific context might be treated sufficiently well in engineering as aleatory.
Aleatory uncertainties, such as randomness in the basic variables, are those that cannot be reduced by a modeler or the manner of modeling. Thus the choice an engineer faces is either to accept or reject such uncertainties, along with the technologies that create them. The fourth guideline is that engineers evaluate the acceptability of aleatory uncertainties on the basis of the acceptability of the risks associated with them. The process of determining the acceptability of risks is complex, as a number of different factors may influence whether or not a risk is judged acceptable. For example, consideration may be given to whether minimum threshold levels of well-being are sufficiently likely to be sustained should a risk be realized (Murphy and Gardoni 2008). Alternately, the acceptability of such risks can be determined using a broadly utilitarian process of cost/risk-benefit analysis (Sunstein 2005; Hansson 2007; Macpherson 2008). In cost/risk-benefit analysis, two or more risks are compared on the basis of their respective advantages and disadvantages, or consequences. Benefits and risks might be categorized quite broadly, to include environmental, marketability, and efficiency considerations. Characteristically, a numerical, often monetary, value is assigned to every consequence. On this approach, a risk is acceptable insofar as it carries a greater benefit-to-risk ratio than the alternatives. In addition to considering risk-benefit information, judgments about the acceptability of risks may take into account whether risks are voluntarily assumed or involuntarily imposed on individuals (Cranor 1990); the relationship between who bears the negative consequences should a risk be realized and who stands to benefit from creating or imposing certain risks (Cranor 2007); the distribution of risks across a population (Harris et al. 2009); and the process by which a risk is created, sustained, or allowed (Wolff 2006; Murphy and Gardoni 2010).
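The cost/risk-benefit comparison described above can be made concrete in a small numerical sketch. All figures, option names, and the monetary framing below are invented for illustration: risk is expressed as the probability-weighted monetary value of the adverse consequences, and two hypothetical design options are then compared by their benefit-to-risk ratios.

```python
# Illustrative sketch of a cost/risk-benefit comparison. Every number is an
# assumed placeholder, not data from any real project.

options = {
    # name: (annual benefit, annual failure probability, loss given failure)
    "conventional design": (1.0e6, 1e-4, 5.0e8),
    "novel design":        (1.5e6, 5e-4, 5.0e8),
}

for name, (benefit, p_fail, loss) in options.items():
    risk = p_fail * loss     # expected annual loss (probability-weighted consequence)
    ratio = benefit / risk   # benefit-to-risk ratio used for the comparison
    print(f"{name}: risk = {risk:,.0f}, benefit/risk = {ratio:.2f}")
```

In this invented example the novel design offers a larger benefit but a much larger expected loss, so its benefit-to-risk ratio is lower; the sketch also shows how narrow the monetary framing is, which is precisely why the paragraph above lists further non-utilitarian considerations.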
The fifth guideline is similar to the fourth but applies to epistemic uncertainties. In this case, the decision to accept (in full or in a reduced form) or reject such uncertainties should be based on a comparison of the potential costs and benefits of accepting optimally reduced uncertainties against the potential costs and benefits of not accepting them. The optimal reduction of epistemic uncertainties should be based on the maximization of the associated net benefits, defined as the benefits brought by the new technology and the reduced uncertainties minus the costs invested to reduce those uncertainties. In weighing these costs and benefits, the engineer should also properly weight the potential benefits by their associated probabilities. Furthermore, there is uncertainty surrounding any estimation of costs and benefits; it is necessary to decide how to factor in that uncertainty when conducting, or acting on the basis of, a cost-benefit analysis. Equally important to recognize is that determining the optimal reduction of epistemic uncertainties involves complicated ethical questions. Judgments must be made regarding how costs and benefits are to be conceptualized. For example, benefits and costs might be conceptualized as a function of economic gains and losses, or as a function of enhancements or diminishments of individuals’ capabilities. Decisions must be made about whether the distribution of costs and benefits will be given independent consideration, or whether only the aggregate total of costs and benefits will be considered. There are furthermore questions about whose costs and benefits will count in a given calculation: for example, only those of direct stakeholders, or those of anyone affected. In the latter case, a further question arises as to the weight that should be given to the costs and benefits for future generations that might be impacted.
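The idea of an optimal reduction of epistemic uncertainty can be sketched as a simple maximization. The functional forms and every number below are assumptions chosen only to exhibit the trade-off the guideline describes: benefits with diminishing returns as uncertainty is reduced, and costs that grow steeply as the uncertainty is driven toward zero.

```python
import math

# Hedged illustration: net benefit of reducing an epistemic uncertainty is
# the benefit brought by the reduction minus the cost invested to achieve
# it. Both functions below are invented placeholders, not engineering data.

def benefit(reduction):
    # Diminishing returns: each extra unit of reduction helps less.
    return 1.0e6 * (1.0 - math.exp(-3.0 * reduction))

def cost(reduction):
    # Cost grows steeply as the uncertainty is driven toward zero.
    return 2.0e5 * reduction / (1.0 - reduction + 1e-9)

levels = [i / 100 for i in range(100)]            # candidate reductions, 0.00..0.99
net = {r: benefit(r) - cost(r) for r in levels}   # net benefit at each level
best = max(net, key=net.get)                      # level maximizing net benefit

print(f"optimal reduction: {best:.2f}, net benefit: {net[best]:,.0f}")
```

Under these assumed forms the optimum lies at an intermediate reduction level: neither ignoring the uncertainty nor attempting to eliminate it entirely maximizes the net benefit, which is the substance of the guideline.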
It should be noted that when deciding whether to reduce a given epistemic uncertainty, engineers should consider the influence that the uncertainty has on the success of a solution to a given engineering problem and the successful protection of the public welfare. The more influence an epistemic uncertainty has on these successes, the greater the obligation to take steps to reduce it. For example, in geotechnical excavation problems, the number and locations of cone penetration tests (CPT) should be adequate to reduce the epistemic uncertainty related to the soil properties and to characterize those properties with sufficient accuracy and completeness.6
There are different strategies that can be used to reduce epistemic uncertainties. Consider the uncertainties incurred in developing a model. Model inexactness can be reduced by developing more sophisticated models. Mistaken assumptions can be reduced by carefully evaluating underlying assumptions. Improving the precision of the measurement instrument can reduce measurement error. Collecting more data can reduce statistical uncertainty. There are also methods for reducing uncertainties in applying a model. Developing better and more informed decision-making processes can reduce volitional uncertainty. Simplifying and reducing human intervention, using tighter controls, and/or developing guidelines for the areas in which it is appropriate to use a given model may reduce human error. Finally, modeling in connection with radical design or a new product brings metadoxastic uncertainty. This type of uncertainty is reduced through the acquisition of additional knowledge about the behavior of engineering structures and the nature of natural phenomena. Thus, as engineers acquire experience with new products and innovative design, metadoxastic uncertainty will be reduced.
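One of the reduction strategies named above, collecting more data to reduce statistical uncertainty, admits a minimal numerical illustration. The population parameters below are invented (loosely suggestive of a soil property), and the sketch simply shows the standard error of a sample mean shrinking roughly as 1/√n as the sample grows.

```python
import random
import statistics

# Minimal sketch: statistical uncertainty shrinks as more data are
# collected. All numbers are assumed placeholders.

random.seed(1)
population_mean, population_sd = 30.0, 6.0   # e.g. an assumed soil friction angle

def standard_error(n):
    """Standard error of the mean for a fresh sample of size n."""
    sample = [random.gauss(population_mean, population_sd) for _ in range(n)]
    return statistics.stdev(sample) / n ** 0.5

for n in (10, 100, 1000):
    print(f"n = {n:5d}: standard error of the mean ~ {standard_error(n):.3f}")
```

The same logic underlies the CPT example above: additional tests narrow the statistical uncertainty about the soil properties, at a cost that must be weighed under the fifth and sixth guidelines.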
It may be necessary in some cases to prioritize the reduction of those epistemic uncertainties that are judged to have an important impact on the successful solution of an engineering problem and the protection of the public welfare. The sixth guideline is that engineers base such prioritization on two factors: the size of the different kinds of uncertainties and the relationship between a given uncertainty and external constraints. Holding importance constant, a greater uncertainty should be prioritized over a smaller one. For example, if the statistical uncertainty is the smallest of all the uncertainties present, then there is reason to concentrate on reducing the other kinds of epistemic uncertainty that are present. Turning to external constraints, reducing statistical uncertainty, for example, requires acquiring additional data, which costs money to obtain. Model inexactness, on the other hand, is reduced by developing more sophisticated models. Whether additional data should be acquired or more sophisticated models developed depends on the cost and time involved in reducing these uncertainties, relative to the cost and time constraints.
The seventh guideline concerns metadoxastic uncertainties. Engineers must strive to keep up with scientific discoveries, as an additional potential resource for reducing the metadoxastic uncertainties surrounding a model. Furthermore, a method for accounting for metadoxastic uncertainties should be developed. One such method could be to specify the degree of confidence in a particular analysis linguistically. That is, we could have categories of assessments of which we are, for example, “highly confident,” “confident,” or “less confident.” The basis for specifying confidence levels could be our general understanding of the problem, based on the comprehensiveness of both the models and the endodoxastic uncertainties accounted for in the analysis. The more comprehensive our knowledge, the more confidence we should have in the accuracy of the assessment. Work would need to be done to specify the degree of comprehensiveness required for a specific linguistic label to apply. Such linguistic labels for confidence levels would serve two important functions. First, they would convey in a clearer and more straightforward manner that there is uncertainty regarding the accuracy of an analysis. This would keep in decision-makers’ minds that no analysis is error-proof. Second, confidence levels can play a normative role in decision-making. A certain threshold of confidence about an analysis would need to be reached before action on the basis of the results of a model could appropriately be triggered. This would help ensure that scarce public (or private) resources are utilized toward purposes that we are reasonably confident are necessary or would be effective.
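The proposed linguistic labels and their normative, threshold-based role can be sketched in a few lines. The comprehensiveness score, the cut-off values, and the rule tying action to a minimum label are all invented placeholders; the paper explicitly leaves their specification as open work.

```python
# Sketch of the linguistic confidence labels proposed above. The score,
# thresholds, and action rule are assumed for illustration only.

def confidence_label(comprehensiveness):
    """Map an assumed 0..1 comprehensiveness score to a linguistic label."""
    if comprehensiveness >= 0.8:
        return "highly confident"
    if comprehensiveness >= 0.5:
        return "confident"
    return "less confident"

ACTION_THRESHOLD = "confident"   # minimum label needed to act on results
RANK = {"less confident": 0, "confident": 1, "highly confident": 2}

def may_act(comprehensiveness):
    """Normative role: act only if the label meets the threshold."""
    return RANK[confidence_label(comprehensiveness)] >= RANK[ACTION_THRESHOLD]

print(confidence_label(0.9), may_act(0.9))
print(confidence_label(0.3), may_act(0.3))
```

The sketch captures both functions named in the text: the label itself communicates that the analysis is uncertain, while the threshold gates whether its results may trigger the expenditure of scarce resources.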
The eighth guideline is that engineers should communicate to the public, in particular the portion of the public that will be exposed to the risks associated with the uncertainties, and to policy makers that there is uncertainty surrounding their work. Such communication helps ensure transparency and invites public scrutiny of engineering work.
The ninth and final guideline is that engineers continue to monitor their projects, in full recognition that the project’s success remains uncertain. Engineers should be attentive to the possibility of unforeseen problems and consequences arising and modify their design in ways that account for increased knowledge (Wetmore 2008).
There are a number of reasons why it is valuable and important for engineers to incorporate these guidelines in their research and practice. First, there are predictable negative consequences that arise from the failure to consider and respond effectively to uncertainty. Engineers may fail to actually satisfy the societal needs that drive the modeling process, if their analysis is inaccurate. Furthermore, engineers may waste or misallocate resources, on the one hand, or undermine safety, on the other, by not taking uncertainty into account in a responsible and appropriate manner. Second, the complexity surrounding engineering work will fail to be sufficiently appreciated, either by engineers or by the general public, if the uncertainty surrounding engineering analyses is not recognized. This can lead to a failure by engineers to seize the opportunity to craft creative and helpful responses to uncertainty. It also may lead engineers to ignore a factor that can have harmful consequences for the public, instead of trying to manage that factor. Furthermore, engineers are uniquely equipped to deal with problems that may arise with products produced or designed on the basis of models that have inherent uncertainty, because the fact of uncertainty is something they are cognizant of and have thought about how to deal with from the beginning. Finally, appreciating the uncertainty surrounding engineering modeling is also important for the public, including end-users and public policy and decision-makers, in order to ensure that the public forms reasonable expectations regarding the risks associated with new technology and engineering products and reasonable expectations regarding what we can demand from engineers in their work.
Risk and uncertainty, especially uncertainty inherent to engineering modeling, are a central part of engineering work. Modeling is important in both engineering and science, and the uncertainties associated with modeling give rise to important issues and problems in both disciplines. Even if scientific realism is not embraced, scientific work must, at a minimum, produce an understanding of nature that allows for accurate explanation and prediction of natural phenomena. Modeling can contribute to this goal, and epistemic uncertainties inherent in modeling must be eliminated, if possible. In engineering, the goal is not the explanation and prediction of natural phenomena, but the creation of artifacts and technology, and again modeling is important. We have offered a classification of the types of uncertainty in developing, applying and interpreting models generally and discussed some of the differences between science and engineering, emphasizing the various constraints on engineering work, many of which have a value dimension. Finally, we have considered some of the special problems associated with uncertainty in engineering modeling and suggested nine guidelines for dealing with such uncertainty. These guidelines are developed based on the central goal of engineering and the nature of different types of uncertainties in modeling.
Theories of Explanation, ed. Joseph C. Pitt (New York: Oxford University Press, 1988): 9–46. Pitt’s collection provides a good survey of major positions on explanation. To give an example of an alternative, Peter Railton rejects the Hempel account, holding that explanations must specify the causal mechanism that brings about an event. What matters in scientific explanation is not prediction, or the ability to state an explanation in terms of argument, but whether we can give a correct description of the underlying causal mechanism that brings about the event we want to explain (Pitt: 119–135).
Different kinds of models, such as theoretical, empirical, and phenomenological, differ according to the object they represent. Theoretical models represent a theory. Empirical models represent data. Models of phenomena offer a complex representation of an event or fact in the natural world; examples are a scale model of a bridge and the Bohr model of the atom (Frigg and Hartmann 2006).
For the classic statement of technology as knowledge, see Edwin T. Layton, Jr., “Technology as Knowledge,” Technology and Culture 15(1): 31–41.
The distinction we are making is between engineering and science as disciplines, not between engineers and scientists as professional figures. An engineer might in fact engage in activities that fall within the realm of engineering, science, and/or management.
The cone penetration test (CPT) is an in situ testing method used in geotechnical engineering to determine the soil properties and stratigraphy.
A draft of this paper was presented at the conference on ethics and modeling at the Delft University of Technology in Delft, The Netherlands, January 11–12, 2010. The authors are grateful for the very helpful comments they received. This research was partially supported by the Science, Technology, and Society Program of the National Science Foundation Grant (STS 0926025). Opinions and findings presented are those of the authors and do not necessarily reflect the views of the sponsor.