
Modeling in Design for Values

Sjoerd D. Zwart

Abstract

This chapter addresses societal implications of models and modeling in engineering design. The more standard questions about well-known technical and epistemic modeling values, such as safety and validity, are left to the standard literature. The sections “Introduction” and “Values in Modeling: Framing and Standard Views” discuss relevant societal norms and values and the ways in which they are model related; they also present standard points of view about the value-ladenness of models. The section “Value-Related Issues Emerging in Model Building and Use” shows various ways in which engineering models may turn out to have unforeseen societal consequences. An important way to avoid such consequences, and to deliberately model for values in a positive sense, is to take models as special kinds of artifacts. This perspective enables modelers to apply design methods and techniques and to view a modeling problem as one in need of an explicit list of design specifications. Modelers may then apply forms of stakeholder analysis and participatory design. Additionally, they may apply well-known hierarchical means-end techniques to explicate and operationalize the relevant values, thereby supporting discussions about those values within and outside the design team. Finally, the model-as-artifact perspective stimulates modelers to produce technical documentation and user guides, which will decrease the negative effects of improper use. The chapter ends with a checklist of issues that the documentation should cover if modeling for values is taken seriously.

Keywords

Model · Value-ladenness · Instrumental and derivative values · Engineering, modeling, and societal and environmental values · Accountability · Affordance · Model as artifact · Modeling practices · Participatory design, value identification, and implementation · Value hierarchy · Model documentation

Introduction

In (2002), Jean-Pierre Brans encouraged all operations research (OR) professionals to take The Oath of Prometheus, his version of the Oath of Hippocrates well known in the medical tradition. After having done so, the OR modeler as decision-maker should not only try to achieve her or his own private objectives but should also be committed to “the social, economic and ecological dimensions of the problems.” These objectives should be met “within the limits of sustainable development.” Moreover, the modeler should refuse to provide “information or tools, which in [her/his] opinion could endanger the social welfare of mankind and the ecological future of Earth.” This imperative of Brans is closely related to avoiding Robert Merton’s third possible cause of “unanticipated consequences of purposive social action,” viz., the “imperious immediacy of interest,” which will be discussed in the section “How to Identify and Address Value-Related Modeling Problems.” On the engineering side, the National Society of Professional Engineers (NSPE) expects its practitioners to exhibit the highest standards of honesty and integrity and to act under the highest principles of ethical conduct. As engineering has a direct and vital impact on the quality of life for all people, it “must be dedicated to the protection of the public health, safety, and welfare.” According to the American Society of Mechanical Engineers, integrity and ethical conduct are core engineering values, just as are respect for the diversity, dignity, and culture of all people. According to engineering codes, practitioners should nurture and treasure the environment and our natural and man-made resources.

To help fulfill the expectations of the engineering societies, this chapter does not follow Brans: it will not formulate an Oath of Epimetheus or Hephaistos. Instead, it concentrates on forging a common ground between unforeseen value-related issues regarding model construction and use on the one hand and values in engineering design on the other. To fulfill the NSPE requirements, for instance, design engineers should have an idea of where to look for value-related issues, and they should know how to identify and manage them. The purpose of this chapter therefore is to help these modelers get to grips with these underexposed questions about the value-ladenness of engineering models. Modeling for values in engineering design as just sketched is a large subject, and it will be delimited as follows. First, the chapter will not sketch an overview of how to achieve standard modeling values, such as verification, validation, robustness, etc. Regarding these well-known subjects, it will refer to the standard literature. Second, it does not embark upon surveying the literature on classical engineering values such as safety, risk, reliability, costs, and security. Third, we will not go into ethical questions about whether some explicit modeling purpose is morally acceptable or not. That is a general ethical question, which is not the topic of this chapter. Here, it is assumed that the purpose of the engineering model is morally acceptable.

Instead, this chapter embarks upon the questions of how to identify and solve the more hidden societal and environmental implications of modeling in engineering design. Answering these questions serves the purpose of helping model builders and users to develop more explicit ideas about the value-ladenness of model production and use. The latter involves topics like which kinds of value-related issues may emerge in engineering models, where to look for them, and how to address them proactively in the modeling process. To achieve this end, in the section “Values in Modeling: Framing and Standard Views,” we analyze the most relevant ideas and introduce some standard positions regarding the value-ladenness of models. Next, in the section “Value-Related Issues Emerging in Model Building and Use,” we will discuss some empirical findings. They present examples of unanticipated value-related issues in the practices of engineering design and show how these values emerge in model construction and use. Then, in the section “How to Identify and Address Value-Related Modeling Problems,” we will discuss how to handle these values in a responsible way. The main advice in this chapter is to view models as special kinds of artifacts. Consequently, the method advocated here to design for values will be to take advantage of existing design methodologies while modeling. We will consider, for instance, the four-phase design cycle to operationalize, put into effect, and document model-related values in a systematic way.

Values in Modeling: Framing and Standard Views

Models and Modeling in Engineering Design

Models come in all forms and sizes. Almost anyone can take almost anything to be a model of anything else. This is perhaps the reason why, to date, all endeavors to provide an explicit definition of a model in terms of necessary and sufficient conditions have failed. In their “Models in Science” lemma in the Stanford Encyclopedia, Frigg and Hartmann (2012) do not even try to give a definition. Morgan and Morrison (1999) confess: “We have very little sense of what a model is in itself and how it is able to function in an autonomous way” (p. 8). In this chapter, models in engineering design are taken to be (1) approximate (2) representations of the target system, which is the actual or future aspect of reality that interests us. Moreover, it is assumed that models are constructed and used for (3) an explicit goal, which need not always be epistemic. Models may be used for construction, for explorative purposes, for making decisions, for comparison, etc. In this chapter, “model” is taken to be a family-resemblance notion, such as “game” or “science,” for which the three characteristics mentioned are important ingredients.1 This model concept does not cover all the ways in which the model notion is used. Notably, people use the “model” notion in explorative contexts in which the representation element is less explicit, such as in artificial life models or agent-based models. Sometimes “model” even seems to refer to something similar to a paradigm. This chapter does not cover these uses of the word model. Although the above model characterization may seem conservative, it nevertheless emphasizes the purpose of a model, which traditional engineering definitions often seem to ignore. Take, for instance, the IEEE 610 Standard Computer Dictionary. It defines a model as “[a]n approximation, representation, or idealization of selected aspects of the structure, behavior, operation, or other characteristics of a real-world process, concept, or system” (Geraci 1991, p. 132).

Embarking on the questions regarding values in modeling in engineering design requires some explicit framing of the relation between values, models, artifacts, their authors, and users. To sketch my frame, I start with a one-level description in which an artifact is produced and used. On this level, the artifact is the object (or its description) that comes out of a successful design process. The most obvious way values come about on this level is through the goals of the artifact: is the artifact made to promote the good or to inflict harm? Values also come in, however, because an artifact is the outcome of a design process, and this outcome is applied in society. Questions therefore arise, for example, about whether the designers have properly considered all the relevant stakes or whether the users, neglecting the user plan, (un)intentionally inflict societal harm by way of the artifact’s unintended application.

To finish the conceptual frame, I propose models to be special kinds of artifacts, which, in engineering design, aim at being applied to the design of another artifact. This results in a two-level description. The modeler produces a model, which will be used by the engineer, the user of the model, to produce a second artifact that again will be applied in society. In such a process, values come in various ways. We should consider the intrinsic values of the model and artifact and the instrumental values related to the production and the use of the model and the artifact.2 Moreover, the situation becomes even more complicated if we realize that the second artifact could be again a model to produce still another artifact or the model may be used for more than one artifact. I will leave these possibilities out as they can be reduced to the previous situation.

The two-level description reveals the complexity of the relation between values, modeling processes, and the products of these processes. Users apply models as a means to achieve some end, which again may be the construction of another artifact, or even a model, with again different values. The cascade of means-end relations introduces a gamut of values; in addition, all the model and artifact means-end relations are open to questions about collateral damage of the model or artifact and about more efficient ways to solve the design problem. Moreover, the actions of all the modelers and designers are amenable to normative assessments as well. In this chapter, we will observe that the traditional idea according to which professionally appropriate modeling will automatically produce morally correct models does not hold. The model-as-artifact perspective in combination with the means-end cascades undercuts this idea.

Two-level descriptions help to disentangle the ways in which models and values are related. Let us first consider the case where models are constructed as artifacts for their own sake, the first-level description. Models, then, are designed for some specific purpose, and they should come with a manual. We may, therefore, at least distinguish between the intrinsic and the instrumental values of the model. The first may be historic, symbolic, esthetic, or financial values, etc., since they make the model good in itself; the second relate to the purpose of the model, such as decision-making, exploration, communication, simulation, etc. If, on top of that, a model is developed to design an artifact, the design of the model should be distinguished from that of the artifact. In such a case, we should consider at least three different ways in which values and models are related.

As in the first-level case, in second-level descriptions a model may have intrinsic and instrumental values, which relate to (and sometimes may even equate to the intrinsic values of) the designed artifact. Third, however, and this is new in comparison with the first-level case, the instrumental values of the designed artifact often become distinctive consequences of the model with the aid of which this artifact is constructed. These may therefore be called the model’s derivative values. Consider the paradigmatic example of the pollution of an internal combustion engine designed by means of a model that relates and predicts the parameters of this engine (and its polluting features). The modeling for values in engineering design explored in this chapter mainly concerns the model’s instrumental and derivative values. The model’s normative impact is primarily considered due to its own instrumental values and to the instrumental values of the artifact it helps to develop.

Various Values

What are the values this chapter focuses on? Interestingly, the two codes in the Introduction put the values of honesty, integrity, and ethical conduct at the top of their lists. If we could not count on modelers and engineers to respect these values, the discussion of modeling for values would not even get off the ground. Assuming these personal attributes of the main actors, we discern the following values among those that are generally taken care of within engineering practices: safety, risk, reliability, security, effectiveness, and costs of the artifacts. I will call them the engineering values. Within professional model building practices, the values explicitly recognized include, at least, verification, validation, robustness, effectiveness, and adequateness of the model; I will refer to those by the term modeling values. As we take models to be special instances of artifacts, the first-level modeling values should directly serve the engineering ones, which is indeed the case: many of them are directly related to the value of reliability. Within a second-level description, the derivative values of a model also (indirectly) concern the instrumental values of the artifact that is based on that model, such as this artifact’s safety, reliability, effectiveness, and costs.

Values less universally taken into account in the technical practices of modelers and engineers mainly cluster around three subjects: the quality of individual life, of social life, and of the environment. The first concerns the health and well-being of human beings (and animals), their freedom and autonomy, and their privacy; even the user-friendliness of artifacts falls within this cluster. The second cluster of values involves the social welfare of humankind, protection of public health, equality among human beings, justice, the diversity and dignity of the cultures of all people, and so on. Finally, the third set of values clusters around our natural (and even artificial) environment and concerns sustainability and durability, sustainable development, and the ecological future of the earth, such that we should “nurture and treasure the environment and our natural and man-made resources.” Let us call these three clusters together the societal and environmental values.

As this chapter is concerned with the values related to models and modeling, let us consider the instrumental and derivative values of models in engineering design. First, we may consider the technical qualities of a model in isolation, without considering the contents of its purpose. The model should be built to serve its purpose without breaking down. An important quality discussed at length in the literature is, for instance, the model’s verification. For a model consisting of a set of equations, verification implies, for instance, that these equations are dimensionally homogeneous; for computer models, it means the model should not have bugs. Other important technical model qualities are, for example, the model’s robustness – the model should behave smoothly under small external disturbances – or its efficacy, which is the model’s ability to produce the desired effect straightforwardly. Besides these sheer technical properties, we can take the model’s goal into account. If the latter is epistemic, an important epistemic value is its validation, which in many contexts comes down to the question whether the model gives an approximately true account of reality. The extent to which the model’s predictions are approximately true determines its accuracy. Traditional modelers would probably also count the objectivity of a model as one of its epistemic values. The technical and epistemic values of models are first-level properties of the model itself and have been extensively investigated and described in the standard literature on models and modeling.3 The purpose of this chapter is to help model builders and users with issues of value-ladenness of model production and use. Consequently, regarding questions about first-level technical and epistemic values, I can with good conscience refer the reader to the standard literature.

A problem less frequently addressed in this literature, and therefore part of this chapter, is, for instance, how different values should be weighed against each other, provided that they are commensurable at all. For instance, how should avoidance of type I errors (claiming something is true whereas in fact it is false) be balanced against avoidance of type II errors (claiming something is false whereas in fact it is true)? In science, the first is considered much more important than the second, but this need not be the case in societal contexts (Cranor 1990). To illustrate the technical and epistemic values of a model and the moral problem of balancing them against each other, let us consider the case of the ShotSpotter.4

The ShotSpotter is a system that detects and locates gunshots by a net of microphones and is mostly used in US urban areas with a high crime rate. It is successful in drawing the attention of the police to gunshots. Trials suggest that people hardly report gunshots to the police, while the ShotSpotter immediately reports the time and place of a putative shot. Central to the system is a model that aims at distinguishing gunshots from other noises. If a sound largely fits the representative characteristics of a gunshot, the sound is reported as a gunshot. The model is well verified if it works well on every occasion it is drawn upon and never gets stuck in the process; it is effective if it does not take too much time to produce its reports. If the model discriminates well between the sounds of firearm shots and other, similar sounds, it is well validated, which implies that the model avoids type I and type II errors as much as possible. Statistically, however, avoiding errors of the first type implies an increase in errors of the second type and vice versa. So, wanting to detect every gunshot implies many false positives, and decreasing false positives causes the police to miss more real gunshots. The question of the appropriate sensitivity of the model therefore has important societal implications, and its answer is not to be found in the technical literature (Cranor 1990).
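This trade-off can be made concrete with a toy detector (a minimal sketch; the score distributions and thresholds below are invented for illustration and have nothing to do with ShotSpotter’s proprietary model): a single sensitivity threshold trades false alarms against missed gunshots.

```python
# Toy illustration of the type I / type II trade-off in an acoustic
# gunshot detector. All scores and distributions are invented; the
# point is only that one sensitivity parameter trades false alarms
# (type I) against missed gunshots (type II).
import random

random.seed(42)

# Hypothetical detector scores: real gunshots tend to score high,
# other impulsive noises (fireworks, backfires) somewhat lower.
gunshots = [random.gauss(0.75, 0.15) for _ in range(1000)]
other_noise = [random.gauss(0.45, 0.15) for _ in range(1000)]

def error_rates(threshold):
    """Return (type I rate, type II rate) at a given threshold."""
    type_1 = sum(s >= threshold for s in other_noise) / len(other_noise)
    type_2 = sum(s < threshold for s in gunshots) / len(gunshots)
    return type_1, type_2

for t in (0.4, 0.5, 0.6, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold {t:.1f}: false alarms {fp:.1%}, missed shots {fn:.1%}")
```

Raising the threshold drives the type I rate down and the type II rate up; where on this curve the model should sit is precisely the societal, not merely technical, question raised above.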

Since models are artifacts, in principle all the main engineering values mentioned might become important instrumental values for models as well. In the section “Value-Related Issues Emerging in Model Building and Use,” we will encounter some ways in which these values might become relevant in the modeling process. The modeler, who might even be the designer of the artifact, need not always be aware of the relevant derivative values. Note that many of the engineering values mentioned before are extensively taken care of in the standards of today’s engineering practices. As with modeling values, here again I will not sketch an overview of the extensive standard engineering literature; instead, I will refer the interested reader to this literature.5 The same holds, mutatis mutandis, for technology-implicated societal and environmental values, which may not enjoy extensive treatment in the engineering literature either. Even for these values, this chapter refrains from explaining how to model for them explicitly. I will not help the reader find literature on, for instance, how to model for privacy or sustainability.6

As we saw, models may be related to technical, epistemic, and social/environmental values, which can be instrumental and derivative. The purpose of this chapter is now twofold. First, it is to sketch various ways in which modeling projects might be, or might become, value related in ways unanticipated by the modelers and to show how projects may harbor overlooked tensions between those individual values. Second, it is to help modelers properly address these tensions internally, within the modeling team and possibly with the users, and externally, with the client and other stakeholders.

Current Ideas About the Value-Ladenness of Models

Values are generally acknowledged to play a decisive role in engineering design.7 What role values exactly play in modeling, however, is still controversial. Scarce attention has been paid to the question of the value-ladenness of models in engineering design. This lack of interest is remarkable as soon as one considers the massive social impact of technology and the important role values play in engineering design, which is the heart of technology. Because of the scarcity of specific literature, we will first discuss some opinions about the role of values found in the more general modeling literature.8

Despite its limited size, the relevant literature displays many different opinions about the value-ladenness of models. It displays outright deniers, more cautious admitters, and militant champions of the idea. To start with the first category, within the context of operations research the idea of objective and value-free models is, for instance, expressed by Warren Walker. He maintains that the “question of ‘ethics in modeling’ is really a question of quality control, … [and] … as analyst … the modeler must make sure that the models are as objective and value-free as possible” (1994, pp. 226–227). More recently, Walker claims, “… if applied operations researchers (acting as rational-style model based policy analysts, and not as policy analysts playing a different role or as policy advocates) use the scientific method and apply the generally accepted best practices of their profession, they will be acting in an ethical manner” (2009, p. 1051), and he argues “that the question of ethics in modeling is mainly a question of quality control” (2009, p. 1054). A similar way to go is to maintain that models themselves are value-free, whereas their alleged value-ladenness is attributed to their goal. Kleijnen (2001), for instance, claims: “a mathematical model itself has no morals (neither does it have – say – color); a model is an abstract, mathematical entity that belongs to the immaterial world. The purpose of a model, however, does certainly have ethical implications” (2001, p. 224).

Not all operations research (OR) colleagues of Walker and Kleijnen agree. Marc Le Menestrel and Luc Van Wassenhove, for instance, are cautious admitters. In (2004), they distinguish between the traditional “ethics outside OR models” just described and the more modern and radical “ethics within OR models” where the various goals of the models are mutually weighted using multiple-criteria approaches. But because “there will always remain ethical issues beyond the model,” they opt for an “ethics beyond OR models” (2004, p. 480). By allowing and combining both quantitative and qualitative modeling methods, they argue that “analysts can adopt an objective approach to OR models while still being able to give subjective and ethical concerns the methodological place they deserve. Instead of looking for a quantification of these concerns, the methodology would aim at making them explicit through a discursive approach” (p. 480). Doing so, Le Menestrel and Van Wassenhove maintain that “we should make [the need for close and on-going communication between the model builder and user] explicitly part of the [modeling] methodology” (2004, p. 480).

According to some authors, Le Menestrel and Van Wassenhove do not go far enough; these authors disagree with them about the strength of the argument that “there will always remain ethical issues beyond the model.” These militant champions hold that we should opt for an unconditional “ethics within models,” acknowledge that models are inherently value-laden, and act accordingly. According to Paul McNelis, for instance, “macroeconomic modeling … must explicitly build in and analyze the variables highlighted by populist models, such as wage and income inequality …” (1994, p. 5). Or as Ina Klaasen succinctly expresses the same issue: “Models are not value-free: moreover, they should not be” (2005, p. 181). From the perspective of science overall, Heather Douglas takes an even firmer stance and argues “that because of inductive risk, or the risk of error, non-epistemic values are required in science wherever non-epistemic consequences of error should be considered. I use examples from dioxin studies to illustrate how non-epistemic consequences of error can and should be considered in the internal stages of science: choice of methodology, characterization of data, and interpretation of results” (2000, p. 559). More recently, she claims that “in many areas of science, particularly areas used to inform public policy decisions, science should not be value free, in the sense just described. In these areas of science, value-free science is neither an ideal nor an illusion. It is unacceptable science” (2007, p. 121).

Without going into the discussion between the deniers, admitters, and champions, let us make two conceptual observations. First, the meaning of the word “model” probably varies from one perspective to the other. Douglas and Klaasen obviously do not discuss Kleijnen’s uninterpreted “mathematical entity that belongs to the immaterial world.” They will consider mathematical models to be mathematical structures with a real-world interpretation. Moreover, these mathematical structures can mutually weigh various values, and with real-world interpretations we have values embedded in the model.9 Second, from the deniers’ perspective, the purpose of models is usually considered epistemic, and models are thus similar to descriptive theories about the world, much akin to Newton’s model of mechanics. In this model-as-theory conception, scarce room is left for normative considerations or values, which are often considered subjective. Engineers tend to take a similar point of view: they often view models as objective representations and as such consider them part of science rather than of engineering. The advocates of the value-ladenness of models, however, conceive models to be instruments that assist in achieving some (often non-epistemic) goal. This model-as-instrument conception of models is almost inconceivable without leaving considerable room for values and evaluative considerations.

From the model-as-theory perspective, one may ask why a modeler should pay attention to the ethical and value aspects of her or his creation. How could we hold Isaac Newton responsible for the V-2 rockets that came down on London and Antwerp in the Second World War? In the first place, and perhaps most importantly, modelers are normatively involved in the design process because they create specific affordances used by the designers during this process. According to Gibson, affordances are “offerings of nature” and “possibilities or opportunities” of how to act (1986, p. 18). They are “properties of things taken with reference to an observer” (1986, p. 137). Gibson extrapolated the scope of affordances and applied them also to technical artifacts such as tools, utensils, and weapons, and even to industrial engineering such as large machines and biochemicals. As models are artifacts and create possibilities of how to act, saying that models create affordances is clearly within Gibson’s original use of the word. Affordances of artifacts, objects, or any instruments can therefore broadly be conceived as those actions or events that these artifacts, objects, or instruments offer us to do or to experience. Consequently, models afford us to gain knowledge or to decide about design proposals, affordances that perhaps did not even exist before the model was created.

Generally, we may say that creators of affordances are at least accountable for the consequences of these affordances. One may define accountability as the moral obligation to account for what happened and for one’s role in making it happen or failing to prevent it. In particular, a person can be held accountable for X if the person (1) has the capacity to act morally right (is a moral agent), (2) has caused X, and (3) X is wrong (Van de Poel 2011, p. 39). Accountability is to be distinguished from blameworthiness. An agent can be accountable for an event but need not be blameworthy, as she or he can justifiably excuse herself or himself. Typical excuses are the impossibility of being knowledgeable about the consequences of the event, the lack of freedom to act differently, and the absence of the agent’s causal influence on the event (Van de Poel 2011, pp. 46–47). For instance, a manufacturer of firearms may be held accountable for the killing of innocent people. As the creator of the affordance to shoot, she or he may be asked about her or his role in the killings by the guns made in her or his company. The typical excuse of the manufacturer reads that she or he did not do the shooting and therefore is not (causally) responsible for the killing. In this sense, the creators of affordances are accountable but need not be blameworthy for the consequences of these affordances. Similarly, but often less dramatically, modelers are accountable for the consequences of the affordances, viz., the design, because they willingly brought into being the affordances of their models. Consequently, if the capacity, causation, and wrongdoing conditions are fulfilled, modelers are accountable for the properties of the final design and may even turn out to be blameworthy; they should therefore pay attention to the normative aspects of their creations.

Value-Related Issues Emerging in Model Building and Use

In the present section, we will encounter various ways in which constructing models may have moral and societal consequences.10 These sketches serve the heuristic purpose of creating awareness among model builders about where and how to look for the normative implications of the process and product of modeling. We will see that indeterminacy of the modeling question, the boundaries of the model, underdeterminacy or complexity of the modeling problem, lack of knowledge and uncertainty, and, finally, value assessments embedded in the models may have unforeseen value-laden effects. We start with the situation where even the concept of the model does not yet exist and where it is even unclear what the target system should be. Then we turn to the possible underdeterminacy of a model, and after that we will consider complexity and uncertainty as possible sources of normativity. We will end with the necessity to explicate the purpose of a model clearly and to communicate it.

Indeterminacy of Model Building Question and Model Boundaries

When a modeling process concerns an innovation, the initial formulation of the modeling problem is often vague and indeterminate, which means that the problem lacks a definite formulation. On top of that, even its conditions and limits may be unclear (Buchanan 1992, p. 16). Indeterminate problems have a structure that lacks definition and delineation. Herbert Simon (1973) calls these kinds of problems “ill structured.” At the outset of the problem-solving process, an ill-structured problem has unknown means and ends. Often, modeling problems are ill structured to such an extent that they become “wicked problems” (Rittel and Webber 1973). Wicked problems have incomplete, contradictory, changing, and often even hardly recognizable requirements. Through interactions with the client and within the modeling team, these problems usually become better structured over time. Even then, however, they may leave open many acceptable solutions. This may be due to partial knowledge or understanding of the model requirements. It may also be due to the underdeterminacy of the target system, when the physical laws applied do not fix the variables involved. Helping to separate the relevant from the irrelevant aspects of the modeled phenomenon has a normative side. Besides weighing epistemic values, such as determining the real-world aspects that need to appear in the model, the definition of the modeling problem also fixes the scope of the societal effects taken into account. For instance, traditional thermodynamic optimization models for refrigerator coolants favored Freon-12 because it is stable, energy efficient, and nonflammable (in the volumes used). When the model includes sustainability, however, this coolant loses its attractiveness because of its propensity to deplete the ozone layer. In 1996, the USA banned its manufacture to meet the Montreal Protocol.

Interestingly, questions about values still arise even at the stage in which the modeling situation is determined and the model can describe the behavior of the fixed target system sufficiently accurately. The following simplified example shows how a seemingly straightforward application of a mass balance relates to societal norms and values. In most (bio)chemical processes, the goal of the process design is the conversion of one substance into another. Usually, the design requirements fix the product mass flux, i.e., how much substance is to be produced in a given time span. Let us for simplicity’s sake assume that the conversion rate of a reactor is 100 % and that conversion depends only on the reactor volume (V) and the reaction temperature (T). The steady-state mass balance of the reactor can then be modeled as depicted in Fig. 1. Under these circumstances, the modeled system is clearly underdetermined, and the design team is free to choose the reactor volume or the reaction temperature.
Fig. 1 Model of chemical conversion at steady state, determined by temperature T and volume V
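To make the underdeterminacy concrete, consider the following minimal sketch (the first-order rate law and all constants are hypothetical placeholders, not taken from the chapter’s case): for one and the same required product flux, many temperature-volume pairs do the job equally well.

```python
# Sketch of the underdetermined reactor model: many (T, V) pairs yield
# the same required product flux. The first-order rate law and all
# constants are hypothetical placeholders chosen only for illustration.
import math

R = 8.314       # J/(mol K), gas constant
K0 = 1.0e6      # 1/s, hypothetical pre-exponential factor
EA = 6.0e4      # J/mol, hypothetical activation energy
C_IN = 1000.0   # mol/m^3, hypothetical feed concentration

def required_volume(flux_req, temperature):
    """Reactor volume (m^3) delivering flux_req (mol/s) at T (K),
    assuming a rate r = k(T) * C_IN per unit of reactor volume."""
    k = K0 * math.exp(-EA / (R * temperature))
    return flux_req / (k * C_IN)

FLUX = 10.0  # mol/s, fixed by the design requirements
for T in (300, 350, 400, 450):
    print(f"T = {T} K  ->  V = {required_volume(FLUX, T):8.3f} m^3")
```

Every printed pair satisfies the design requirement equally well; the model itself is silent about which combination is the safest or most sustainable one.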

Despite its simplicity, the conversion case already raises interesting questions about societal consequences and thus value-ladenness. Although the modeling choice of volume and temperature is seemingly a factual affair, their trade-off has considerable derivatively value-laden implications. The larger the reactor, the larger the volume of the contained substance, the larger the amount of substance that may be spilled in case of leakages or other calamities, and the larger the volume of substances to be managed during the shutdown and start-up of the plant. Moreover, extremely high or low temperatures can cause major hazards to operators and raise sustainability issues due to high energy requirements. Thus, fixing the temperature-volume region has societal implications regarding environmental issues, safety, and sustainability hazards. Does this validate the assertion that the flux model is value-laden, or should the responsibility for these value-related issues be laid exclusively at the feet of the designers of the reactor – provided they are not the same persons?

Answering that some model is only a set of equations will not do. Following our characterization of a mathematical model, the latter includes rules of interpretation and therefore is more than just a set of mathematical equations. Nevertheless, the flux model does not explicitly embed a value judgment, because it is silent about which combinations of volumes and temperatures are preferable. Quite the contrary, the model refrains from any direct value judgment and only describes the relation between the variables within some margins of error; the only value involved is the model’s accuracy. However, representing the situation as only a physical relation between variables, without safety and sustainability considerations, already implies an evaluative stance. The model could have incorporated information about unsafe volumes and energy consumption. Thus, the choice of the model’s constitutive parts and the absence of reasonable upper and lower limits render the model value-laden in the derivative sense. The absence of societal aspects reflects the modelers’ judgment that they are not important enough to be considered. From the present considerations, we may conclude that the delineation of the modeling problem and the decision whether or not to put boundaries on the values of the model’s variables have social and environmental consequences.

Underdeterminacy of the Physical Situation

Even if the target system of a modeling procedure and the model’s boundaries are fixed by the design context, the model may be underdetermined, and underdeterminacy may also be a source of unnoticed normativity. Consider, for instance, the example of a model describing the output of an electrodialysis procedure used to regain acids and bases from a plant’s recycled wastewater stream. In Zwart et al. (2013), we observed two steps in its related model development. The first version of the model merely represented the features of the electrodialysis. It described the relation between an electric current and a membrane area (at which the electrodialysis took place) for a fixed production of bases and acids. This first version of the model, however, was underdetermined, since it failed to suggest which size of the membrane or which current was to be preferred. To come to a unique working point, the modelers added economic constraints to the model.

The second version of the model therefore also included economic equations, which allowed for calculating the membrane size by minimizing the total cost of the process. The newly introduced considerations had a significant impact on the model’s characteristics. Whereas the first version of the model was merely descriptive, after the introduction of economic constraints it became normative. After this introduction, the model could identify the optimal design, but to the detriment of other values, such as safety and sustainability. If the modelers had used sustainability considerations to fix the optimum, their model would perhaps have come to a different preferred working point. These and similar examples show that model optimizing strategies are very likely to introduce normative issues, which often go unrecognized.
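A minimal sketch may clarify how adding an economic objective turns an underdetermined, descriptive model into a normative one (all cost figures and the loss model below are invented for illustration, not taken from the electrodialysis study):

```python
# Sketch: an underdetermined process model becomes normative once an
# economic objective is added. All figures are hypothetical.

CAPEX_PER_M2 = 500.0    # euro per m^2 of membrane (hypothetical)
LOSS_PRICE = 2000.0     # euro per unit of resistive loss (hypothetical)
TOTAL_CURRENT = 100.0   # A, fixed by the required acid/base production

def total_cost(area):
    """Capital cost grows with membrane area; resistive losses grow
    with current density (= TOTAL_CURRENT / area)."""
    capex = CAPEX_PER_M2 * area
    opex = LOSS_PRICE * (TOTAL_CURRENT / area)  # simplistic loss model
    return capex + opex

# Crude scan over candidate membrane areas (1.0 .. 49.9 m^2).
candidates = [a / 10 for a in range(10, 500)]
best = min(candidates, key=total_cost)
print(f"cost-optimal membrane area: {best:.1f} m^2 "
      f"(total cost {total_cost(best):.0f} euro)")
```

Swapping the cost objective for an energy or safety objective would generally move the “optimal” working point: the optimization criterion, not the physics, fixes the design.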

Complexity, Lack of Knowledge, or Uncertainty

Besides the indeterminacy of modeling questions or the underdeterminacy of the physical description of the problem situation, we consider three additional sources of value-related issues: the complexity of the target system, lack of knowledge, and uncertainty about the behavior of the target system. Many modeling situations in engineering design are far too complex to be handled in all their details at once. Design engineers apply different methods to cope with these situations, and many of these methods have normative implications. We mention some of them: reducing the number of variables and constants in the model, neglecting the start-up and shutdown phases, and carving up the problem into more manageable sub-modules.

First, reduction of the number of variables can be achieved by treating them as constants, and reducing the number of constants is helpful when the theoretical value of a constant is unknown and hard to establish in a reasonable time. In such situations, the value of the parameter is estimated or, sometimes, just left out. The reduction of variables and constants will usually introduce inaccuracies. To illustrate this phenomenon, let us consider an example in which an enzyme is used as a catalyst for a biochemical conversion. To model the reaction rate r, various models are available; see Table 1, in which the k’s are different constants, [E], [S], and [P] are concentrations of reactants, and T is the temperature. Models with fewer constants (the left column of Table 1) or fewer variables (the right column) become less accurate: the former have a smaller range of application, whereas the latter consider fewer dependencies. Notice that both the reduction of variables and that of constants result in a change of the values of the constants in the reduced equation.
Table 1

Differences in models’ complexity

|   | Decreasing constants | Decreasing variables |
|---|---|---|
| 1 | \( r = k_n \cdot [E] \cdot \frac{[S]}{k_m + [S]} \cdot \frac{[S]^2}{k_r} \) | \( r = k_t(T) \cdot [S] \cdot \frac{1}{[P]} \) |
| 2 | \( r = k_i \cdot [E] \cdot \frac{[S]}{k_m + [S]} \) | \( r = k_p \cdot [S] \cdot \frac{1}{[P]} \) |
| 3 | \( r = k_s \cdot [E] \cdot [S] \) | \( r = k_f \cdot [S] \) |

The resulting models are value related for at least two reasons. First, the epistemic values of accuracy and generality are assumed to be of less value than the pragmatic, non-epistemic value of being able to model the target system at all. Second, if, for example, temperature is left out of the equations, it cannot be taken into account anymore when safety or reliability of the system turn out to be temperature dependent. The decision to leave temperature out thus entails a derivative value judgment.
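The loss of accuracy under reduction can be made visible by comparing, say, rows 2 and 3 of the left column of Table 1, the Michaelis–Menten form and its linear reduction (the constants below are illustrative; the reduced constant \( k_s = k_i / k_m \) is chosen so that the two forms agree at low substrate concentrations):

```python
# Compare the Michaelis-Menten rate (Table 1, left column, row 2) with
# its reduced first-order form (row 3). All constants are illustrative.

K_I = 1.0         # catalytic constant of the Michaelis-Menten form
K_M = 2.0         # Michaelis constant
E = 0.5           # enzyme concentration
K_S = K_I / K_M   # reduced constant, matching the full form at low [S]

def r_full(s):        # row 2: r = k_i [E] [S] / (k_m + [S])
    return K_I * E * s / (K_M + s)

def r_reduced(s):     # row 3: r = k_s [E] [S]
    return K_S * E * s

for s in (0.1, 1.0, 5.0, 20.0):
    full, red = r_full(s), r_reduced(s)
    print(f"[S] = {s:5.1f}: full {full:.3f}, reduced {red:.3f}, "
          f"relative error {(red - full) / full:+.0%}")
```

The reduced model is nearly exact at low [S] but errs by an order of magnitude at high [S]; it also illustrates the remark above that reduction changes the values of the remaining constants.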

A second way in which modelers avoid complexities in the modeling process is to concentrate only on the steady state of the process, neglecting the modeling of the system’s start-up and shutdown phases. After all, static situations are much easier to model than dynamic ones. Considering only steady-state modeling, however, leads to neglect of the system’s dynamic behavior and substantially decreases the model’s range of application. From the viewpoint of safety, this neglect is undesirable, as in practice the start-up and shutdown phases of large-scale (bio)chemical processes are the most dangerous (Mannan 2005). The modeling decision to focus on the steady state and to neglect the start-up and shutdown dynamics therefore has normative implications in the derivative sense.
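A toy simulation indicates what a steady-state-only model leaves out (a minimal sketch with hypothetical first-order tank dynamics): during start-up the system spends considerable time far from its design point, a transient that simply does not exist in the steady-state description.

```python
# Euler simulation of a stirred tank during start-up. A steady-state
# model knows only the final value C_SS; the transient below is
# invisible to it. All parameters are hypothetical.

FLOW = 0.1     # 1/s, dilution rate (flow rate / volume)
C_IN = 8.0     # feed concentration (arbitrary units)
C_SS = C_IN    # steady state of dC/dt = FLOW * (C_IN - C)

c, dt = 0.0, 0.5   # start-up from an empty tank
for step in range(81):
    if step % 20 == 0:
        print(f"t = {step * dt:5.1f} s: C = {c:.2f} "
              f"(steady-state model: C = {C_SS})")
    c += dt * FLOW * (C_IN - c)
```

If hazards depend on the concentration staying within some band, the steady-state model has nothing to say about the entire start-up transient.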

A third way to cope with the complexity of a modeling problem is to divide it into independent parts and to try to solve the less intricate problems posed by those parts. This modularity in the modeling approach, however, sometimes poses its own hazards. In the electrodialysis example mentioned above, some hydrogen and oxygen gas was produced at the electrodes during the production of acids and bases from salts. The conversion model neglected the production of hydrogen and oxygen because of its minor impact on the efficiency and the mass balance. At a later design stage, however, when carrying out the hazard and operability (HAZOP) analysis, the modelers failed to recognize the hazards posed by the free production of hydrogen and oxygen together. It turned out, therefore, that although the simplifying assumptions were made with great care and were harmless on a small scale, they posed a much larger risk when the design was scaled up.11 The electrodialysis example nicely illustrates the dangers of scaling within a modular approach and, more generally, the context dependency of value assessments.

Proper Use of the Model and Communication

In the examples of the previous section, the model builders were in close contact with the users of the model or were even identical to them. In situations where the users of the models are unfamiliar to the modelers, different kinds of problems emerge regarding the value relatedness of models. The first issue concerns the model’s use. When we take models as special kinds of artifacts, we recognize that they may or may not be used according to the modelers’ intentions. The first will be called proper use, and the second improper use of the model. Proper use of artifacts is closely related to a second issue, viz., appropriate communication about the model’s proper use. Thus, instructions about proper use and the model’s normative dimensions require effective communication. While the importance of communication between modelers and other stakeholders is hardly denied in theory, it is often neglected in practice. Let us turn to two examples illustrating problems with improper use and insufficient communication: one exhibits instrumental values, the other derivative ones.

The first example concerns geographic information systems (GIS) as decision support systems. These systems model the topography of a landscape, representing differences in height, water streams, water flow, and the type of landscape (e.g., forests or plains). Sometimes these hydrological models are used for decision-making in geo-engineering. However, as Jenkins and McCauley (2006) describe, the use of GIS procedures may raise problems because of the application of the SINKS and FILL routines. GIS programmers aim at approximating topographies in a simple way while keeping data accuracy high. To that end, they make two assumptions. First, they assume that rivers and water streams follow a fractal geometric structure, i.e., the whole river system has the same structure as its parts. This assumption increases the simplicity of the model, because it simplifies the recognition of river systems. According to the second assumption, local topography is flat; consequently, isolated sinks or mounds are assumed to be noisy data. This assumption increases data accuracy, because depressed or elevated cells often do not correspond to reality. Unfortunately, the two assumptions taken together have the tendency to filter out valuable wetlands, which “provide important ecological services such as flood mitigation, groundwater recharge, nutrient processing, and habitat for unique flora and fauna” (Jenkins and McCauley 2006, p. 278). The first assumption filters out wetlands because they seem unconnected to the branches of rivers and are not recognized as part of the water stream system. The second does the same, as wetlands are depressed areas. In combination, the two assumptions may even filter out comparatively large wetlands. Consequently, even when geo-engineers use GIS models to aim at decisions with low environmental impact, they may unintentionally opt for destroying wetlands if they are unaware of the mechanisms just mentioned.
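A toy version of the mechanism (a deliberately crude sketch, not the actual GIS algorithms) shows how a FILL-style routine that treats isolated depressions as noisy data can erase a wetland from the model:

```python
# Toy FILL routine on a tiny elevation grid: isolated depressions are
# raised to the height of their lowest neighbor, on the assumption
# that they are noisy data. The one real depression (a small wetland)
# disappears. A caricature of the mechanism, not of actual GIS code.

dem = [
    [5.0, 5.0, 5.0, 5.0],
    [5.0, 2.0, 5.0, 5.0],   # isolated depression: a small wetland
    [5.0, 5.0, 5.0, 5.0],
]

def fill_sinks(grid):
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            neighbors = [grid[i + di][j + dj]
                         for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < rows and 0 <= j + dj < cols]
            if grid[i][j] < min(neighbors):   # cell drains nowhere:
                out[i][j] = min(neighbors)    # "fill" it as noise
    return out

filled = fill_sinks(dem)
print("wetland cell before FILL:", dem[1][1], "-> after FILL:", filled[1][1])
```

A user of the filled map simply never sees the wetland; the value judgment is buried in the preprocessing.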

Jenkins and McCauley suggest several solutions (2006, pp. 280–281). In their view, GIS programmers “could try to educate users about the limitations of some of the algorithms.” They may also let their programs always “produce output that more accurately reflects the actions undertaken to produce the model.” It would “help the end users better understand that GIS products are no more than [just] models of real ecosystems and landscapes.” The most effective solution according to the authors is the technical fix, viz., to let the programmers “change the assumptions in the current modeling algorithms to avoid assumptions that ‘fill’ isolated wetlands.” They also contend that the “menu-driven, point and click interface,” besides the more technical command line, increases the risk of accidents. “Organizations that provide GIS data layers and products (e.g., maps, databases) that include hydrological models described above should carefully examine the assumptions of the model, including ramifications and limits on the ethical use of models given those assumptions, and should prominently list those assumptions and ramifications in metadata associated with data and products.” Regarding the question of who bears responsibility, Jenkins and McCauley claim that as the “GIS programmers are in a position of power,” “[t]he locus of responsibility […] reside[s] mostly with the GIS programmers.” This assessment follows the ethical principle that a person with more power in a situation has increased responsibilities in that situation.

Although the GIS example just described may originate in specific concerns about the disappearance of wetlands, it nevertheless provides a clear example of the dangers produced by improper use of models. After all, the hydrological GIS models are constructed with routines for streaming, not stagnant, water. More recently, the issue of improper model use and its ethical relevance has received more specific attention in the literature. Kijowski et al. (2013), for instance, gathered empirical information by consulting nineteen experts about their experiences with computational models. Based on this information, they discussed ways in which model builders and users may reduce improper uses of models.

The normative difficulties with the GIS example just mentioned relate to the model’s own instrumental values, which are directly related to the goal-related consequences of the model itself: the model is used to support decision-making and not to construct new artifacts. Jenkins and McCauley describe how the long distance between modeler and user may result in improper use with serious environmental consequences. In the section “Documentation About the Values,” we will see that in systems engineering a long distance between designer and user (here, the modeler and the user) calls for extensive database documentation. When the lines between modeler and user are short or even nonexistent, the advice is to stick to the less extensive self-documentation. The combination of the two may still harbor dangers, as the second example shows. It features a small distance between the modelers and the model users (the designers of the artifact) and a large distance between the designers and the users of the artifact based on the model – the Patriot case presented below. This combination results in derivative model values related to the safety of the artifact.

On the 25th of February 1991, a Patriot missile defense system at Dhahran, Saudi Arabia, failed to intercept a Scud rocket due to an “inaccurate tracking calculation” in the trajectory model; tragically, 28 soldiers died and another 98 were wounded.12 The main problem was related to the model representing the rocket trajectories. Originally, the Patriots were designed to operate against armies with highly developed intelligence, such as the former Soviet Army. Consequently, the missile systems were assumed to have to deal with enemy detection for only short periods. In the First Gulf War, the intelligence capabilities of the enemy were less sophisticated, and therefore the Patriot systems were used over several consecutive days at several sites. Over longer periods, however, the approximations of the model calculating the Scud trajectories became less accurate, and after about 20 consecutive hours of operation without a reboot they became useless. The main problem, the modelers’ decision to allocate insufficient bits to represent the time variable, was not resolved before the failure happened, because the modelers “assumed … Patriot users were not running their systems for eight or more hours at a time” (Blair et al. 1992, p. 8), though several model updates had been made in the months before the failure. The modelers also sent out a warning that the Patriots were not to be used for “very long” periods. However, neither the users’ assessments nor the instructions for use had been explicit enough to prevent the incident (Blair et al. 1992).
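The arithmetic root of the failure is well documented in the public accounts (the GAO report and Blair et al. 1992): the system counted time in tenths of a second, 0.1 has no exact binary representation, and the fixed-point constant used for 1/10 was chopped, so every tick carried a tiny truncation error that grew with uptime. The following sketch reconstructs that accumulation (the 23-fractional-bit chopping and the approximate Scud velocity follow the common reconstruction of the GAO figures; the code itself is ours):

```python
# Sketch of the accumulating clock error behind the Patriot failure.
# Time was counted in 0.1 s ticks, and 0.1 has no exact binary
# representation; per the common reconstruction of the GAO figures,
# the stored constant for 1/10 was chopped to 23 fractional bits.

STORED_TENTH = int(0.1 * 2**23) / 2**23   # chopped, not rounded
ERROR_PER_TICK = 0.1 - STORED_TENTH       # ~9.5e-8 s per 0.1 s tick

SCUD_SPEED = 1676.0  # m/s, approximate Scud velocity

for hours in (1, 8, 20, 100):
    ticks = hours * 3600 * 10             # 0.1 s ticks of uptime
    clock_error = ticks * ERROR_PER_TICK
    print(f"{hours:3d} h uptime: clock off by {clock_error:.4f} s "
          f"-> tracking offset ~{clock_error * SCUD_SPEED:.0f} m")
```

On the chapter’s figure of roughly 20 consecutive hours, the accumulated clock error already translates into a tracking offset of well over a hundred meters.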

Probably, the Scud incident would have been prevented by more explicit and intense communication between the model builders, the Patriot designers, and its users. If the designers had consulted the users more explicitly and if the modelers had informed the designers about the uselessness of the model after 20 h of operation, the incident would probably not have happened. The Patriot case provides a good example in which improved communication and transparency would have decreased the risk of the accident significantly.

Recently, various authors have made a plea for improved communication among all the stakeholders involved in the development and use of models. Fleischmann and Wallace (2005), for instance, make a case for transparency regarding the working mechanism of decision support models. They argue that “the outcome of modeling depends on both the technical attributes of the model and the relationships among the relevant actors. Increasing the transparency of the model can significantly improve these relationships.” The plea for more transparency and better communication between model builders and users is taken up and discussed by Shruti and Loui (2008), who use their seventh section, “Communication Between Modelers and Users,” to comment on and elaborate Fleischmann and Wallace’s reasons to advocate transparency for model builders.

The ways in which models become value-laden and the examples discussed in this section closely relate to the subject of “unintended (or unforeseen) consequences” in the social sciences. In his seminal (1936) paper, “The Unanticipated Consequences of Purposive Social Action,” Robert Merton distinguishes five causes of those consequences. The first is “lack of adequate knowledge,” or the factor of ignorance, which he carefully distinguishes from “circumstances which are so complex and numerous that prediction of them is quite beyond our reach” (p. 900). This factor clearly connects to the section “Complexity, Lack of Knowledge, or Uncertainty.” Second, Merton identifies “error.” He writes: “the actor fails to recognize that procedures which have been successful in certain circumstances need not be so under any and all conditions.” The latter is related to not anticipating what happens with a model outside its specs, something for which we can hold the modelers in the Patriot-system accident accountable. The GIS example, too, is related to the case of improper use; Jenkins and McCauley’s (2006) subtitle justly reads: “unintended consequences in algorithm development and use.” As a third factor, Merton mentions the “imperious immediacy of interest,” which he describes as “the actor’s paramount concern with the foreseen immediate consequences excludes the consideration of further or other consequences of the same act.” This factor is similar to the “collateral damage” discussed in the section “Models and Modeling in Engineering Design,” and we will come back to it in the section “How to Identify and Address Value-Related Modeling Problems.” Besides Merton’s fourth factor, “basic values,” which may have detrimental effects in the long term, he invokes as a fifth cause something like the self-defeating prophecy. He describes it as follows: “Public predictions of future social developments are frequently not sustained precisely because the prediction has become a new element in the concrete situation, thus tending to change the initial course of developments” (pp. 903–904). We did not encounter examples of self-defeating prophecies here, but the phenomenon of self-defeating and self-fulfilling prophecies is highly relevant for modelers building models for policy decision support.

How to Identify and Address Value-Related Modeling Problems

This section is dedicated to the practices of modeling for values in engineering design. More specifically, we will discuss the question of how modelers may identify the most relevant instrumental and derivative values in an engineering design modeling process. Identifying these values alone, however, does not suffice: modelers should have ways to find out how to manage and realize these values. Even this is insufficient. In the end, during the evaluation process, the modeler should also be concerned about the aftercare of her or his model and the related artifact. She or he should at least take care of an adequate user plan and sufficient communication with the users about the model. How are we to structure this process of modeling for values? An important way to go is to apply engineering design methodologies more explicitly to modeling itself. Acknowledging models to be special kinds of artifacts, with well-elaborated design specifications and a specified goal, will help the modeler to systematically manage the values (possibly) involved in her or his modeling assignment.

When a modeler explicitly follows the outline of some design methodology, it supports the modeling process in at least two ways. First, it frees her or his mind from the traditional idea that the purpose of a model is always epistemic and makes her or him realize that models can, and often do, have other goals. Second, the application of proven design practices helps the modeler to think outside the constraints "given by the design commissioner."

Design methodologies come in many forms and sizes; the one we will follow is inspired by Jones (1992), Roozenburg and Eekels (1995), and Cross (2008). They take design methodologies to be combinations of distinguishable phases.13 First, we identify a diverging phase of analysis, in which the design brief is analyzed, redefined, and perhaps subdivided into subproblems and in which the design specification is determined. In this phase, the goals are set and the design specifications are operationalized. The second is a transforming, synthetic phase in which the working principle is chosen and the design is built up from its parts; it results in a provisional design, models, or even prototypes. Third, with the provisional solution at their disposal, the design team finds out in the simulation phase, through reasoning or experimentation, to what extent the prototype exhibits the required behavior. Fourth, in the evaluation phase, it is decided whether the prototype fulfills the requirements well enough or whether it requires optimization or even replacement by a new design proposal. In the latter case, the design cycle should be applied again until the outcome satisfies the design specifications. When the design process has finished, the design team should communicate about the design with the outside world and provide extensive technical documentation.
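To make the iterative character of the four phases concrete, the following minimal sketch renders the cycle as executable toy code. The example problem, function names, and numbers are invented for this illustration; they are my own assumptions, not a method prescribed by Jones, Roozenburg and Eekels, or Cross.

```python
# A minimal, runnable sketch of the four-phase design cycle applied to a toy
# modeling problem: choosing a mesh resolution that keeps a simulation error
# below a tolerance. All names and numbers are illustrative assumptions.

def analyze(brief):
    # Phase 1 (analysis): operationalize the design brief into a specification.
    return {"max_error": brief["tolerance"], "resolution": 10}

def synthesize(spec):
    # Phase 2 (synthesis): build a provisional design from the specification.
    return {"resolution": spec["resolution"]}

def simulate(design):
    # Phase 3 (simulation): find out how the prototype behaves
    # (here a stand-in error model: error shrinks with resolution).
    return {"error": 1.0 / design["resolution"]}

def evaluate(behavior, spec):
    # Phase 4 (evaluation): does the prototype fulfill the requirements?
    return behavior["error"] <= spec["max_error"]

spec = analyze({"tolerance": 0.01})
while True:
    design = synthesize(spec)
    behavior = simulate(design)
    if evaluate(behavior, spec):
        break
    spec["resolution"] *= 2  # optimize and run the cycle again

print(f"accepted design: {design}, simulated error: {behavior['error']}")
```

The point of the sketch is only structural: the specification produced in the analysis phase is the fixed yardstick against which every provisional design is evaluated, and the loop terminates only when the specification is met.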

The application of the four-phase design cycle to model construction has two important consequences. First, a model should come with a design brief and a design specification; its purpose and functionalities should be stated, such as its limits, its working conditions, and its domain of application. Its design specification should operationalize the most important requirements, including all its instrumental values. Second, when a model is developed for engineering design, some members of the modeling team should also participate in the design cycle of the artifact for which the model was built. In this way, the modeling team can be sure that the model design specifications cover the most important derivative values and that the model adequately manages the value-related issues in the artifact design.

Focusing on model values, the next subsections loosely follow the phases just described. The section "Identifying and Accounting for Instrumental and Derivative Values" instantiates the analysis phase, in which the model design brief should be specified and the most important instrumental and derivative model values should be identified and operationalized. Corresponding to the synthetic phase, the section "Operationalization and Implementation of Values" reports on one possible way to operationalize different values in a model-building context; it discusses Van de Poel's (2013) values hierarchy. Finally, parallel to the evaluation phase, the section "Documentation About the Values" discusses the aftercare for the model once the modeling is done. It stresses the importance of documentation and communication of all the value-related issues concerning the final outcomes of the modeling process.

Identifying and Accounting for Instrumental and Derivative Values

The application of the design-cycle perspective to the modeling process, with a focus on values, yields interesting observations. Let us start with the model design brief, which states the model design problem. First, this commission should explicate the modeling problem itself and the owner of this problem, and it should clearly state the goal of the model. Second, the commission should also explicate the context of application. Considering the instrumental values, the commission should elaborate on the values directly involved in the model purpose and on the values more indirectly involved in the modeling problem. Because making a model is an action, the modeling team should list all the possible answers to the modeling problem and should proactively think about the possible "collateral damage" of the various actions and models involved. Moreover, the commission should at least identify and discuss the possible tensions and incompatibilities between the values involved. To arrive at the derivative values of the model, the same questions have to be asked about the artifact to be designed.

When we turn to the model design specifications, the application of the design cycle again provides important value-related insights regarding the model's instrumental and derivative values. Overall, the design specification elaborates the objectives of the object or process to be designed. It is the list of criteria fixing the extent to which the designed object fulfills its purpose. The design specification is therefore the main instrument for assessing possible design proposals; this is its main function. Applied to modeling for values, the model design specification is the most appropriate place to state and elaborate to what extent the model satisfies its instrumental and derivative values. Regarding the model's instrumental values, the list should specify its technical values and, if its goal is knowledge, its epistemic ones. Even if the model does not serve the purpose of constructing an artifact, this does not suffice: stand-alone models often have societal and environmental consequences; the GIS case with the SINKS and FILL routines provides a telling example. And as the Oath of Prometheus, mentioned in the Introduction, explicitly states, the modeler or modeling team should also think proactively about what we have called the societal and environmental values possibly involved in their models.

When the aim of a model is the construction of a technical artifact, the modeling team should at least consider the artifact's design brief and design specification to arrive at the relevant derivative values. Moreover, it should participate in the various iterations of the artifact design cycle to keep track of changes in the artifact design specification and the values involved. In the electrodialysis example, this participation and the attempt to put all values concerned in the model design specification would have revealed that the introduction of costs could possibly be to the detriment of societal and environmental values, and in the ShotSpotter example it would have explicated more clearly the weighing of type I against type II errors (a toy illustration follows below). In the next subsection, we will return to the question of how to organize the values within a model design specification.
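To give a flavor of what such an explicit weighing could look like, consider the following toy calculation. The operating points and cost weights are invented for this sketch and are not ShotSpotter data; the point is only that the "technically optimal" threshold follows directly from a value-laden judgment about how much worse a miss is than a false alarm.

```python
# A toy illustration with invented numbers: making the weighing of type I
# (false alarm) against type II (missed detection) errors explicit.

def expected_cost(fp_rate, fn_rate, cost_false_alarm, cost_miss):
    # Expected cost per event under the stated (value-laden) cost weights.
    return fp_rate * cost_false_alarm + fn_rate * cost_miss

# Hypothetical operating points of a detector: (threshold, FP rate, FN rate).
operating_points = [(0.3, 0.20, 0.02), (0.5, 0.10, 0.05), (0.7, 0.04, 0.15)]

# The explicit value judgment: a missed detection is taken to be five times
# worse than a false alarm. Changing this single number changes which
# threshold the seemingly technical optimization selects.
best = min(operating_points,
           key=lambda p: expected_cost(p[1], p[2],
                                       cost_false_alarm=1.0, cost_miss=5.0))
print(f"chosen threshold: {best[0]}")
```

Documenting the cost weights, rather than only the chosen threshold, is what makes this value decision open to discussion by users and other stakeholders.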

As a design specification needs to be as complete as possible, the application of the design cycle to modeling for values implies the quest for a list of instrumental and derivative modeling values that is as complete as possible. Moreover, when a supposedly complete set of those values has been gathered, the question arises of how these values should be weighed against one another. Here again I would like to draw upon the heritage of design practitioners. To arrive at a list of values that is as complete as possible, and at decisions about how to weigh them, modelers should follow designers who, in various democratization movements, consult the most relevant stakeholders to establish the design specifications and their relative weights.

One of these movements originated in Scandinavia in the 1970s and was driven by the urge to let users cooperate with designers within the design process. For that reason, it was called cooperative design. When exported to the USA, the name was changed for political reasons to participatory design (Schuler and Namioka 1993). Participatory designers advocate an active involvement of all stakeholders (such as designers, clients, regulators, users, fundraisers, and engineering firms).14 Science and technology studies have been another breeding ground for the appeal for increased democratization in technological design. After allegedly having "falsified" the traditional perspective that technology has a determining influence on society, sociologists of technology claimed to have shown that the successful functioning of artifacts is a social construction. Along this constructivist line of thinking, they advocated more participation of citizens in technological developments and design (see, e.g., Bijker 1995; Feenberg 2006). The developments in the participatory-design and democratization movements came together in a special issue of Design Issues in the summer of 2004, reporting on the symposium "An STS Focus on Design." In this issue, Dean Nieusma writes: "participatory decision making is (1) fairer and (2) more intelligent than nonparticipatory processes." To show that participatory design is fairer, Nieusma cites Schuler and Namioka (1993, p. xii), who say: "people who are affected by a decision or event should have an opportunity to influence it."

Elaborating on the last statement, Diekmann and Zwart (2013) interpret the democratization movements in modeling and design as valuable steps in the direction of modeling for justice, i.e., reaching an overlapping design consensus, possibly with all the stakeholders involved. This consensus provides a foundation for value decisions that is morally more justified than just letting the modelers and designers balance the values they have identified or elaborate cost-benefit analyses. In the same vein, Fleischmann and Wallace (2005) discuss the stakes of the "various actors involved in decision support: the modelers, the clients, the users, and those affected by the model" (see also Fleischmann and Wallace 2009).

A difficult but unavoidable question arises here: who are the relevant stakeholders?15 Overall, stakeholders are those who have an interest in the design, such as customers, consumer organizations, producing companies, suppliers, transporters, traders, government, etc. However, who is to decide who has a genuine interest and whether an alleged interest is important enough to qualify someone as a stakeholder? Not every self-proclaimed interest suffices. After all, is someone who wants all airplanes to be yellow a stakeholder in airplane design? What shall we decide about future stakeholders? Is a healthy person who will be seriously ill in 10 years' time a stakeholder in the design of medicine? Are future citizens stakeholders in the decision about a new nuclear energy plant? Questions arise about voluntariness as well. Parties that are involuntarily affected by the design may rightfully be called stakeholders. Consequently, pedestrians are stakeholders in car design, perhaps even more so than future car owners. After all, the customers can refrain from buying the car, whereas pedestrians are potential victims in accidents and cannot choose whether they want to be hit by the car.16

Besides consulting stakeholders, designers also carry out life cycle analyses to complete the design specification, and they consult standard checklists, such as those of Hubka and Eder (1988, p. 116), Pahl and Beitz (1984, p. 54), and Pugh (1990, pp. 48–64). These lists may also be useful for finding as-yet-unidentified model values. The identification of these values is important, but it is only a necessary condition for a satisfactory value design specification. In the next section, we turn to the question of how the various values can be organized in a design specification so as to arrive at an operationalized set of values.

Operationalization and Implementation of Values

Although the completeness of the design specification regarding values is an important necessary condition for modeling for values, it is not sufficient. A large set of partially interdependent values, norms, and design specifications without any structure would be very impractical and too difficult to manage and adjust during the modeling and design process. In addition to being complete, the model-related set of values should therefore avoid redundancy and promote independence among its values. Besides the tension between completeness and nonredundancy, the third quality constraint for the values in the model's design specification is the clarity of their meaning and of the way they are operationalized. As values are often abstract concepts, their meaning in a specific context should be explicated such that the extent to which the model or design fulfills the value design criteria can be assessed intersubjectively. In other words, the value criteria in the design specification should be testable. Finally, the modelers should take care that the proposed ways to operationalize the abstract values are valid, that is, that they still carry largely the same meaning as the abstract values they started with at the outset. To serve the purposes of nonredundancy, appropriately operationalized values, and validity, we will consider Van de Poel's (2013) method of the values hierarchy.

To arrive at a set of valid, intersubjectively operationalized and testable, complete but nonredundant design requirements, the design literature often uses the instrument of a hierarchical tree of design objectives (e.g., Cross 2008, pp. 65–71; Roozenburg and Eekels 1995, pp. 141–143). At the top of these trees, the most general and abstract objectives of the artifact are situated, and the lower nodes refer to subgoals that should be reached to serve the final goal at the top. According to Cross (2008), for instance, an intermediate means serving the goal of a safe new transport system is "a low number of deaths." This intermediate objective is in turn served by the means of a "high speed of medical response to accidents" (p. 69). Objectives trees or means-end hierarchies normally contain various branches and many nodes, in which the lower nodes are the means that contribute to the ends in the nodes on the higher layers.

Besides the pragmatic, how-to-act, means-end aspect just explained, we may distinguish at least two other, largely independent, dimensions along the edges of the objectives tree. The first is a semantic one. From top to bottom, the notions in the nodes of the tree vary from abstract to concrete, and the lower-level nodes operationalize the higher-level ones. From this semantic perspective, the tree explicates what the higher-level objectives mean in relation to the artifact and its context. For instance, "safe" in relation to a transport system may be operationalized, among other things, as "low number of deaths." To see that the pragmatic aspect of "safe" in the tree differs from the semantic one, we need only realize the following. "High speed of medical response to accidents" serves the purpose of "low number of deaths," which serves the purpose of being a "safer transport system." We can hardly claim, however, that a high speed of medical response to accidents makes the transport system safer. Apparently, the pragmatic aspects along the edges of the tree are transitive, whereas the semantic perspective sometimes lacks this property. Besides pragmatics and semantics, we may also distinguish a value dimension along the branches of the tree. Every node but the highest has instrumental value for connected nodes higher in the tree, and the highest node has only an intrinsic value. Normally, the weight of the node values varies with their level in the tree – the higher, the more important. The lowest ones may even have a negative value, so that we arrive at situations where "the end justifies the means." Although not completely unrelated, the means-end and the value dimensions in the tree are not identical and need to be distinguished.

An interesting proposal to systematize and explicate modeling for values can be drawn from Van de Poel (2013). Van de Poel combines the three dimensions of the designers' objectives tree to operationalize the abstract values at the top of the tree using the values of the leaves at the bottom. Since models are artifacts, his approach is also relevant for model builders. To realize abstract values in a design, Van de Poel introduces values hierarchies, which consist of three basic layers: the abstract values relevant for the artifact reside at the top layer; the middle layer consists of all general norms, which are considered as "prescriptions for, and restrictions on, action" (p. 258); and the bottom layer consists of the design requirements. Van de Poel considers two criteria for the top-down operationalization of abstract values into norms: "the norm should count as an appropriate response to the value," and "the norm, or set of norms, is sufficient to properly respond to or engage with the value." In a second step, these norms are specified with the aid of design specifications. This step may concern the explication of the goal, the context, and the action or the means.

Bottom-up, values hierarchies are built up from for-the-sake-of relations. Design requirements serve the purposes of certain norms, which in their turn are built in for the sake of the final and abstract values. Van de Poel discusses the example of chicken husbandry systems. There, the general value of animal welfare is served by the general norms: presence of laying nests, litter, perches, and enough living space. In their turn, these norms are realized by design requirements such as at least 450 cm2 of floor area per hen, 10 cm of feeding trough per bird, and a floor slope of at most 14 %.
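To illustrate, the following sketch renders such a three-layer hierarchy as a simple data structure, populated with the chicken husbandry example. The class names, and the pairing of requirements with particular norms, are my own illustrative guesses, not part of Van de Poel's (2013) account.

```python
# A sketch of a three-layer values hierarchy (value, norms, requirements).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    text: str                      # bottom layer: a testable design requirement

@dataclass
class Norm:                        # middle layer: prescription for/restriction on action
    text: str
    requirements: List[Requirement] = field(default_factory=list)

@dataclass
class Value:                       # top layer: an abstract value
    text: str
    norms: List[Norm] = field(default_factory=list)

welfare = Value("animal welfare", norms=[
    Norm("enough living space",
         requirements=[Requirement("at least 450 cm2 floor area per hen")]),
    Norm("adequate feeding facilities",  # hypothetical norm, added for illustration
         requirements=[Requirement("10 cm of feeding trough per bird")]),
    Norm("litter",                       # pairing with the slope requirement is a guess
         requirements=[Requirement("floor slope of at most 14 %")]),
])

# Reading the tree bottom-up yields the for-the-sake-of relations.
for norm in welfare.norms:
    for req in norm.requirements:
        print(f"'{req.text}' for the sake of '{norm.text}' "
              f"for the sake of '{welfare.text}'")
```

The structure makes both directions of the hierarchy explicit: top-down, each value is operationalized by its norms and requirements; bottom-up, each requirement can be traced to the value it exists for the sake of.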

As we saw in the first section, engineers already design for general values such as safety, sustainability, and even privacy. The point, however, is the following. If modelers (and engineers) were to introduce values hierarchies as an instrument to realize these values explicitly, the ways in which a model serves certain values and avoids negative ones would be much more explicit, debatable, and open to correction and improvement. Surely, values hierarchies do not solve all value conflicts in modeling and design, but at least they explicate and systematize the value judgments involved. By doing so, they render these judgments transparent for discussion in internal and external debates.17 This transparency of the values implied by models and artifacts is a necessary condition for launching new models and artifacts into a civilized and democratic society.

Although this chapter emphasizes the parallels between the methods applied by designers of technical artifacts and by modelers who model for societal values, we should not forget an important relative difference between the two. Designers deciding about purely technological matters have a wealth of scientific and engineering literature to consult and experiments to carry out; they have much more objective or intersubjective knowledge about the world to fall back on than modelers, managers, and politicians who have to decide about the societal and environmental effects of a model. Take, for example, the design of some alloy steel with a certain strength, corrosion resistance, and extreme-temperature stability for nuclear reactors. The designer who decides how to design steel with the required specifications can fall back on materials science and can carry out experiments. Her or his design will be based on a wealth of intersubjective knowledge and experience. This decision process is normative but backed up more strongly by scientific knowledge than decisions concerning the societal, political, and environmental impacts of steel production. Questions such as whether nuclear reactors should be built at all and, if so, which rare earth metals should be used and from which countries they should be obtained are less straightforwardly backed up by science. In a word, societal and environmental values are backed up with far less objective and generally accepted knowledge than scientific and technological ones. The same holds mutatis mutandis for modelers and their models.

Because knowledge about technical values differs from knowledge about societal and environmental values, the question arises of who should decide about the latter. Engineering modelers seem the most appropriate parties for making strictly technical decisions about modeling and design. Are they, however, also the ones who should take the decisions regarding societal and environmental issues? Since some designers have concluded that this question should be answered negatively, they initiated the democratization movements mentioned above. Mutatis mutandis, the same could be said about modeling. The values-hierarchy method can be carried out by a modeler or a modeling team. Nevertheless, it is preferable, less paternalistic, more democratic, and arguably more just when the value decisions are taken by all the stakeholders involved.

Documentation About the Values

In the previous section, we discussed an instrument that helps take values into account as explicitly, transparently, and systematically as possible. The GIS and Patriot examples show that if modelers want to avoid societal and environmental accidents, their task does not end with applying this or similar instruments. They should stimulate and further the value debate among colleagues, users, and other stakeholders. To avoid accidents, modelers should also provide extensive technical documentation and user manuals. In addition, one could even argue that, parallel to the designers of artifacts, they should provide aftercare for their creations and should evaluate how their products function in the real world.

In the most general terms, following a design methodology such as that proposed in the fifth chapter of Whitaker and Mancini (2012) would enable modelers to document the value-sensitive decisions made during the design of the model in the same way. Whitaker and Mancini state that documentation production in systems engineering follows the four-design-phase cycle mentioned before. After having discussed the iterative nature of design processes, they claim:

“At each stage in the process, a decision is made whether to accept, make changes, or return to an earlier stage of the process and produce new documentation. The result of this activity is documentation that fully describes all system elements and that can be used to develop and produce the elements of the system.” Whitaker and Mancini (2012, p. 69)

Identifying and following the same stages in the modeling process, modelers could keep track of the decisions they make about the envisaged instrumental and even derivative values involved. Doing so, they could provide the model users and other stakeholders with technical documentation about their analyses and decisions regarding the values embedded in and implied by their creations.

As we have done previously, here again we should distinguish between modeling situations in which the modeling and design teams are close or even identical and those in which the distance between the two is much larger. In the first case, the modelers should be at least as engaged in the design development phases of the artifact as in those of the model. To produce adequate value documentation, the modelers should be acquainted with the value assessments of the designers and with their ideas about proper and improper use of their artifact in practice. For the modelers, then, the emphasis is on the derivative values of their models; the Patriot case provides a telling example. In the second case, characterized by a large distance between modeler and user, the emphasis is more on the instrumental or goal-related values of the models. Modelers should therefore support and document the value assessments following their own modeling methodology – the GIS case provides a good example. The first situation, with a small distance between modelers and users, compares with what the technical-documentation literature has called self-documentation, characterized by small enterprises and close collaboration (Baumgartner and Baun 2005, p. 2384); the second, which relates to a large enterprise, more complex tasks, and a large distance between the collaborators, requires database documentation (idem, p. 2385).

Besides these general discussions of documentation, checklists of items that such documentation should cover are helpful. Below, I attempt to set up such a list without claiming that its entries are necessary or that the list is sufficient or complete. This first attempt should be read as an invitation to modelers and colleagues to discuss and elaborate it, such that we arrive at a more mature list ratified by modelers and by analysts studying modeling practices. In setting up the list, I envisaged the distance between the modelers and the users to be considerable, and following the list one is likely to end up with a value description that is more like database documentation than self-documentation. The list features mainly instrumental values and general derivative ones, because concrete derivative values depend too much on the details of the artifact and its context. The issues that follow emanate mainly from definitional characteristics, engineering and model-building practices, and societal and environmental values.

To my mind, the values within the model documentation should at least cover the following items (their origins are mentioned in parentheses):
  1. Clear indications about the purpose of the model (i.e., what it is made for and what its function is) and a description of its proper use (i.e., how it should be used, according to its makers, to achieve the goal of using it) (purpose)

  2. The list of the model's design specifications and clear descriptions of how, and in which context, the model should be applied within its window of application (purpose)

  3. Indications about the model's limitations, its abstractions, and its assumptions (approximation and representation)

  4. Clear indications about the model's technical properties and behavior, such as its robustness and efficacy, and information about how the model was verified (technical values)

  5. Clear indications about the model's accuracy and about its validation (epistemic values)

  6. A clear description of the tensions between various values in the model (and the resulting artifact) and of the choices that have been made to cope with them (transparency and communication)

  7. Indications about how the model (with or without the supported artifact) copes with engineering values such as safety, risk, reliability, security, effectiveness, and costs, within and even outside the specs

  8. Statements about how the model, in isolation or in combination with the intended artifact, takes into account societal and environmental values such as the quality of individual life, of social life, and of the environment

  9. Descriptions of how the modelers have applied defensive design methodologies to make the model (or model-artifact combination) foolproof and thus prospectively prevent possible model accidents, which may be due to all kinds of misuse such as application outside the specs or use for other purposes than intended

  10. Plans about how the introduction of the model, whether or not in combination with its artifact, will be monitored and how the model will possibly be adapted, adjusted, or improved when it turns out that its use has negative societal consequences (aftercare)
Entry 1 is needed to learn about the proper use of the model and to delimit the scope of the model's application; some models are developed for general application, while others are optimized only for specific conditions. Ad 2: the explicit list of the model's design specifications and, if it is very complex, a summary of this list enable close and distant users to learn about the details of the model's scope of application. Entry 3 should be covered because, as models are approximate representations, they necessarily leave out many features of the represented target system. If these left-out features are not acknowledged in the documentation, model users might have false beliefs about the abilities of the model. The model's abstractions are closely related to its assumptions: every model incorporates a variety of assumptions that have a fundamental impact on its performance. Items 4 and 5 are necessary for a responsible launch of the model in society.

Ad 6: all designs have to cope with tensions between values, and the design of models is no exception. For transparency's sake, the value documentation should explicate which tensions the modelers and designers have considered, how they managed these tensions, and on the basis of which arguments. For instance, which compromise between privacy and efficacy has been chosen? How are errors of type I and type II balanced, and for what reasons? Ad 7: since technical and engineering values may conflict with societal and environmental ones, modelers should explicate all of them to chart these tensions and their choices. Entry 8 requires modelers and designers to explicate what they did to anticipate all relevant societal and environmental issues. Ad 9: in principle, defensive design comes down to anticipating all the ways in which an artifact can be misused and blocking the misuse, or reducing the damage, by adequate design. Of course, in line with Murphy's law, completely foolproof models and artifacts do not exist. Douglas Adams put it succinctly: a "common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools" (Adams 2009, p. 113). Finally, under item 10, modelers should explain their plans for aftercare.
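Pulling the ten items together, the following minimal sketch shows how the checklist could be cast as a machine-readable documentation template, so that incomplete value documentation is detected automatically. The field names paraphrase the items above and are my own assumptions, not an established documentation standard.

```python
# An illustrative documentation template covering checklist items 1-10.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelValueDocumentation:
    purpose_and_proper_use: str = ""                                       # item 1
    design_specifications: List[str] = field(default_factory=list)        # item 2
    limitations_and_assumptions: List[str] = field(default_factory=list)  # item 3
    verification_of_technical_properties: str = ""                        # item 4
    accuracy_and_validation: str = ""                                      # item 5
    value_tensions_and_choices: List[str] = field(default_factory=list)   # item 6
    engineering_values: str = ""                                           # item 7
    societal_and_environmental_values: str = ""                            # item 8
    defensive_design_measures: List[str] = field(default_factory=list)    # item 9
    aftercare_and_monitoring_plan: str = ""                                # item 10

    def missing_items(self) -> List[str]:
        """Return the names of checklist fields left empty."""
        return [name for name, value in vars(self).items() if not value]

# Usage: a partially filled record reveals what still needs to be documented.
doc = ModelValueDocumentation(
    purpose_and_proper_use="support wetland-conservation siting decisions",
)
print("still to document:", doc.missing_items())
```

Such a template would make the completeness of the value documentation itself testable, in line with the testability requirement argued for above.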

The checklist illustrates that the design, development, and introduction of complicated models and artifacts in society is a complicated combination of retrospective and prospective processes. The technical (4) and epistemic (5) properties of a model are often assessed only in a backward-looking way. Planning the aftercare (10) and assessing the possible impact of the model or model-artifact combination (9) are clearly forward looking. Other issues are combinations of the two. Establishing the engineering values (7), for instance, is based on past experience, but the model's performance regarding these values will never be completely certain and should be monitored when the model is used in practice. Even the model's abstractions (3), its specifications (2), and even (1) its exact proper uses are not fixed in advance once and for all. Research on engineering practices such as that of Downey (1998) and Vinck (2003), but especially that of Bucciarelli (1994), clearly shows that in real life engineering design is a social practice. The list of specifications, the exact purpose of the design, and even the working principles may change drastically during the design process and, for socio-technical systems, even after the design has been launched into society. This adaptation of the specs during and even after the design process leads Bucciarelli and Kroes (2014) to claim that instrumental rationality falls short of describing engineering design in practice. The forward-looking aspect of introducing models and artifacts into society makes it partly an open process.

The openness of the development and introduction of technical models, stand-alone or underlying a technical artifact, is the reason why the traditional deniers' argument for value-free models fails. Model development and introduction is a dynamic and open process and an interaction among many stakeholders. The model's specs, its proper use, and the consequences of improper use are partly un- and underdetermined, and many of these consequences only become clear once the artifact is launched into society. Because modelers are the ones most acquainted with the model's behavior within and outside the specs, they are at least accountable for determining the specs, the choice of the borderlines between proper and improper use, and other value-related issues. The openness of launching artifacts or models into society obliges their creators to participate in a prospective process of investigating how their model might cause damage to persons, society, or the environment. Considering, for example, the GIS and Patriot cases, modelers and designers should take to heart the lessons of Merton and his followers about unforeseen or unanticipated consequences. Ample and comprehensive model documentation helps to prevent the modeler from becoming blameworthy for possible damage inflicted by her or his artifact. Moreover, and perhaps even more importantly, it helps to make the discussion about the target values of the modeling process much more explicit and transparent; by doing so, it justifies and democratizes the processes of modeling and engineering design.

Summary and Conclusions

This chapter has been mainly about the instrumental and derivative values of models in engineering design. For the common technical and epistemic modeling values, such as safety and validity, it referred to the standard literature. We focused mainly on the ways in which societal and environmental values emerge unnoticed during the process of developing and applying a model. Because launching models and their affordances into society is an open process, we argued that modelers bear responsibility for more than just the model's behavior within the specs. Next, we showed how various forms of indeterminacy, underdeterminacy, complexity, and lack of communication about proper use may have (often unnoticed) value-laden consequences in practice. An important way to model for values is then to take models explicitly as special kinds of artifacts and to apply various design methodologies. This way of interpreting the modeling job enables the modeler to apply methods and techniques from design and to view a modeling problem as a multiple-criteria problem in need of an explicit list of design specifications, including value-related issues. To find all these values, modelers may apply forms of stakeholder analysis and participatory design. Additionally, they can apply hierarchical means-end trees to explicate and operationalize the values, and their mutual tensions, involved in the modeling job, supporting their internal and external discussions. Finally, the model-as-artifact perspective helps modelers to sustain this discussion by producing technical documentation and user guides during the various phases of the modeling (design) process. The chapter ended with a checklist of issues that the documentation should cover if a modeling team wants to make a start with taking modeling for values seriously. May this chapter be a first step toward more comprehensive methods and lists for managing societal values in modeling for engineering design.

Modelers should realize that they can, and often should, model for certain values, not least because they are accountable for negative (and positive) societal implications of their creations. For this reason, they should not only take care of the functional, technical, and engineering values of their creations. They should also proactively spot unanticipated societal implications of their contrivances. To systematize the instrumental and derivative values and their tensions, they can apply values hierarchies to manage, realize, and document these values in their work. In doing so, modelers would render the value-ladenness of their work more transparent and would contribute substantially to the internal and public debate about the social and environmental consequences of their models.

Footnotes

  1. Note that, according to my characterization, a mathematical model "is not merely a set of (uninterpreted) mathematical equations, theorems and definitions" (Gelfert 2009, p. 502). It includes the interpretation rules that define the relation between the equations and some features of the target system. "Mathematical model" is therefore a thick concept.

  2. In this chapter, I adopt Frankena's (1973) definition of intrinsic and instrumental values. The first are "things that are good in themselves or good because of their own intrinsic properties," and the last are "things that are good because they are means to what is good" (p. 54).

  3. See, e.g., Zeigler et al. (2000); Sargent (2005); Barlas (1996); Rykiel (1996).

  4. The example is from Shelley (2011), who discusses several examples of technological design with conflicting interests.

  5. Such as Haimes (2005).

  6. As models are special kinds of artifacts, many chapters in the present handbook discuss the engineering, societal, and environmental values mentioned in this section and more. They provide important starting points for the standard literature I have been referring to.

  7. See, for instance, Pahl and Beitz (1984); Pugh (1990); Jones (1992); Roozenburg and Eekels (1995); Cross (2008).

  8. Relevant literature originates in investigations into ethics in operations research and into values in computational models.

  9. For more on the difference between embedded and implied values in models, see Zwart et al. (2013).

  10. The examples in this section come from participatory research reported in more detail in Zwart et al. (2013).

  11. The 1991 Sleipner case shows that inattentive downscaling can also cause catastrophes. See Selby et al. (1997) for the details of how a concrete offshore platform collapsed due to incorrect downscaling of an FEM model.

  12. After the Gulf War, discussions arose about the efficacy of the Patriot defense system (cf. Siegel 2004), and the software failure was criticized as just a scapegoat for the army to cover up the malperformance of the Patriot system. This discussion, however, does not subvert the example. Even if the critics are right, we may consider the Patriot software failure to be an instructive imaginary case. See Diekmann and Zwart (2013) for a more detailed account.

  13. See also the ABET (1988) definition of design, which states: "Among the fundamental elements of the design process are the establishment of objectives and criteria, synthesis, analysis, construction, testing and evaluation," or ISO (2006), section 5.

  14. For recent developments in participatory design, see the special issue of Design Issues on the subject (volume 28, number 3, summer 2012) or the proceedings of the biennial Participatory Design Conference (PDC), which held its 12th meeting in 2012.

  15. Woodhouse and Patton (2004, p. 7) ask a similar question within the STS context of design: "Who shall participate in making decisions about new design initiatives (and in revising existing activities)?"

  16. To find out how to identify the relevant stakeholders and their views, modelers could also explore the way systems and software engineers carry out requirements analysis, which covers, among other things, stakeholder identification and joint requirement development sessions.

  17. These are the two ends that also inspired the cautious admitters' position of Le Menestrel and Van Wassenhove, discussed in the section "Current Ideas About the Value-Ladenness of Models."

Acknowledgment

This chapter draws on and elaborates Zwart et al. (2013) and Diekmann and Zwart (2013). Moreover, it presents part of Van de Poel (2013) as a starting point for the operationalization of societal values in engineering design. Finally, the author wants to thank Sven Diekmann and the editors of the present volume for their comments on the outline and contents of this chapter.

References

  1. ABET, Accreditation Board for Engineering and Technology, Inc (1988) Annual report for the year ending September 30, 1998. New York
  2. Adams D (2009) Mostly harmless. Pan Macmillan, London
  3. Barlas Y (1996) Formal aspects of model validity and validation in system dynamics. Syst Dyn Rev 12(3):183–210. doi:10.1002/(SICI)1099-1727(199623)12:3<183::AID-SDR103>3.0.CO;2-4
  4. Baumgartner F, Baun TM (2005) Engineering documentation. In: Whitaker JC (ed) The electronics handbook, 2nd edn. CRC Press, Boca Raton
  5. Bijker WE (1995) Democratisering van de technologische cultuur. Schrijen-Lippertz, Voerendaal
  6. Blair M, Obenski S, Bridickas P (1992) GAO/IMTEC-92-26 Patriot missile software problem. http://www.fas.org/spp/starwars/gao/im92026.htm
  7. Bucciarelli LL (1994) Designing engineers. MIT Press, Cambridge/London
  8. Bucciarelli L, Kroes P (2014) Values in engineering. In: Soler L, Zwart S, Lynch M, Israel-Jost V (eds) Science after the practice turn in the philosophy, history, and social studies of science. Routledge, New York/London, pp 188–199
  9. Buchanan R (1992) Wicked problems in design thinking. Des Issues 8(2):5–21. doi:10.2307/1511637
  10. Cranor CF (1990) Some moral issues in risk assessment. Ethics 101(1):123–143. doi:10.2307/2381895
  11. Cross N (2008) Engineering design methods: strategies for product design. Wiley, Chichester/Hoboken
  12. Diekmann S, Zwart SD (2013) Modeling for fairness: a Rawlsian approach. Stud Hist Philos Sci A 46:46–53
  13. Douglas H (2000) Inductive risk and values in science. Philos Sci 67(4):559–579
  14. Douglas H (2007) Rejecting the ideal of value-free science. In: Kincaid H et al (eds) Value-free science? vol 1. Oxford University Press, New York, pp 120–141
  15. Downey GL (1998) The machine in me: an anthropologist sits among computer engineers. Routledge, New York/London
  16. Feenberg A (2006) Replies to critics. In: Veak TJ (ed) Democratizing technology: building on Andrew Feenberg's critical theory of technology. State University of New York Press, Albany, pp 175–210
  17. Fleischmann KR, Wallace WA (2005) A covenant with transparency: opening the black box of models. Commun ACM 48(5):93–97. doi:10.1145/1060710.1060715
  18. Fleischmann KR, Wallace WA (2009) Ensuring transparency in computational modeling. Commun ACM 52(3):131–134. doi:10.1145/1467247.1467278
  19. Frankena WK (1973) Ethics. Prentice-Hall, Englewood Cliffs
  20. Frigg R, Hartmann S (2012) Models in science. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Fall 2012 edition). http://plato.stanford.edu/archives/fall2012/entries/models-science/
  21. Gelfert A (2009) Rigorous results, cross-model justification, and the transfer of empirical warrant: the case of many-body models in physics. Synthese 169(3):497–519. doi:10.1007/s11229-008-9431-6
  22. Geraci A (1991) IEEE standard computer dictionary: compilation of IEEE standard computer glossaries. IEEE Press, Piscataway
  23. Gibson JJ (1986) The ecological approach to visual perception. Lawrence Erlbaum, Hillsdale
  24. Haimes YY (2005) Risk modeling, assessment, and management, vol 40. Wiley, Hoboken
  25. Hubka V, Eder WE (1988) Theory of technical systems: a total concept theory for engineering design. Springer, Berlin
  26. ISO (2006) ISO 11442:2006(E) Technical product documentation – document management. International Organization for Standardization, Geneva
  27. Jenkins DG, McCauley LA (2006) GIS, SINKS, FILL, and disappearing wetlands: unintended consequences in algorithm development and use. In: Proceedings of the 2006 ACM symposium on applied computing. ACM, New York, pp 277–282. doi:10.1145/1141277.1141342
  28. Jones JC (1992) Design methods. Wiley, New York
  29. Kijowski DJ, Dankowicz H, Loui MC (2013) Observations on the responsible development and use of computational models and simulations. Sci Eng Ethics 19(1):63–81. doi:10.1007/s11948-011-9291-1
  30. Klaasen I (2005) Modelling reality. In: Jong TMD, Voordt VD (eds) Ways to study and research urban, architectural and technical design. IOS Press/Delft University Press, Delft, pp 181–188
  31. Kleijnen JPC (2001) Ethical issues in modeling: some reflections. Eur J Oper Res 130(1):223–230. doi:10.1016/S0377-2217(00)00024-2
  32. Le Menestrel M, Van Wassenhove LN (2004) Ethics outside, within, or beyond OR models? Eur J Oper Res 153(2):477–484. doi:10.1016/S0377-2217(03)00168-1
  33. Mannan S (2005) Lee's loss prevention in the process industries: hazard identification, assessment, and control. Elsevier Butterworth-Heinemann, Burlington
  34. McNelis PD (1994) Rhetoric and rigor in macroeconomic models. In: Wallace WA (ed) Ethics in modeling. Pergamon, Oxford/Tarrytown, pp 75–102
  35. Merton RK (1936) The unanticipated consequences of purposive social action. Am Sociol Rev 1(6):894–904. doi:10.2307/2084615
  36. Morgan MS, Morrison M (1999) Models as mediators: perspectives on natural and social science. Cambridge University Press, Cambridge
  37. Pahl G, Beitz W (1984) Engineering design: a systematic approach. Design Council, London
  38. Pugh S (1990) Total design: integrated methods for successful product engineering. Addison Wesley, Wokingham
  39. Rittel HWJ, Webber MM (1973) Dilemmas in a general theory of planning. Policy Sci 4(2):155–169. doi:10.1007/BF01405730
  40. Roozenburg NFM, Eekels J (1995) Product design: fundamentals and methods. Wiley, Chichester/New York
  41. Rykiel EJ (1996) Testing ecological models: the meaning of validation. Ecol Model 90(3):229–244. doi:10.1016/0304-3800(95)00152-2
  42. Sargent RG (2005) Verification and validation of simulation models. In: Proceedings of the 37th conference on winter simulation, Orlando, pp 130–143
  43. Schuler D, Namioka A (1993) Participatory design: principles and practices. Lawrence Erlbaum, Hillsdale
  44. Selby RG, Vecchio FJ, Collins MP (1997) The failure of an offshore platform. Concrete Int 19(8):28–35
  45. Shelley C (2011) Fairness in technological design. Sci Eng Ethics 18(4):663–680. doi:10.1007/s11948-011-9259-1
  46. Shruti K, Loui M (2008) Ethical issues in computational modeling and simulation. Cincinnati
  47. Siegel AB (2004) Honest performance analysis: a not-always met requirement. Defense Acquisition Review Journal, January–April, pp 101–106
  48. Simon HA (1973) The structure of ill structured problems. Artif Intell 4(3–4):181–201. doi:10.1016/0004-3702(73)90011-8
  49. van de Poel IR (2009) Values in engineering design. In: Meijers AA (ed) Philosophy of technology and engineering sciences, vol 9. Elsevier/North Holland, Amsterdam/London/Boston, pp 973–1006
  50. van de Poel I (2011) The relation between forward-looking and backward-looking responsibility. In: Vincent NA, van de Poel I, Hoven J (eds) Moral responsibility. Springer Netherlands, Dordrecht, pp 37–52
  51. van de Poel IR (2013) Translating values into design requirements. In: Michelfelder DP, McCarthy N, Goldberg DE (eds) Philosophy and engineering: reflections on practice, principles and process. Springer, Dordrecht, pp 253–266
  52. van de Poel IR, Royakkers L (2011) Ethics, technology, and engineering: an introduction. Wiley-Blackwell, Malden
  53. Vinck D (ed) (2003) Everyday engineering: ethnography of design and innovation. MIT Press, Cambridge
  54. Walker WE (1994) Responsible policy making. In: Wallace WA (ed) Ethics in modeling. Pergamon, Oxford/Tarrytown, pp 226–241
  55. Walker WE (2009) Does the best practice of rational-style model-based policy analysis already include ethical considerations? Omega 37(6):1051–1062. doi:10.1016/j.omega.2008.12.006
  56. Whitaker JC, Mancini RK (2012) Technical documentation and process. CRC Press, Boca Raton
  57. Woodhouse E, Patton JW (2004) Design by society: science and technology studies and the social shaping of design. Des Issues 20(3):1–12. doi:10.1162/0747936041423262
  58. Zeigler BP, Praehofer H, Kim TG (2000) Theory of modeling and simulation, 2nd edn. Academic, San Diego
  59. Zwart SD, Jacobs J, van de Poel I (2013) Values in engineering models: social ramifications of modeling in engineering design. Eng Stud 5(2):93–116

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. TU Eindhoven, Eindhoven, Netherlands
