
Engineering Research and Ethics

  • Michael Davis
Living reference work entry

Abstract

Engineering research takes place in at least four domains: the laboratory, the pilot, digital models, and “the field.” In the laboratory, engineering research most resembles research in physics, chemistry, or biology. Issues concerning accuracy, truthfulness, crediting, and the like are much the same in engineering as in the sciences. The chief distinctive ethical issue in engineering research in the lab is that the research should seek to improve the material condition of humanity, not just seek knowledge for its own sake. There is no “pure engineering.” Much the same is true of research using digital models. In research done with “pilot projects,” however, the ethical issues most resemble those of medicine when testing drugs for safety or effectiveness. So, for example, should a pilot project begin to threaten the public welfare, it would have to be ended even though an important opportunity to learn would be lost. In the field, the ethical issues in engineering research most resemble those in public health. For example, engineers should keep good records of complaints about their products; have procedures for quickly identifying threats to the public health, safety, or welfare; and have procedures in place for responding appropriately. Research in engineering is continuous with the practice of engineering.

Keywords

Human welfare · Laboratory · Pilot · Models · Field · Ethics

Introduction

Engineering research may occur in a laboratory, a pilot project, a digital model, or “the field.” Wherever it occurs, engineering research may be associated with a university (or other academic institution), a government agency (such as Argonne National Laboratory), a business (such as Boeing or Morton Salt), or an independent for-profit or not-for-profit laboratory (such as Underwriters Laboratories). Most engineering research goes on within large organizations because only they can afford it.

Laboratories, pilot projects, and digital models are relatively controlled environments. “The field,” in contrast, is relatively uncontrolled, the setting for which a specific product is designed. For example, all BMW diesel engines now in service but manufactured between 2012 and 2018 could be the field for a certain muffler refinement.

Engineering research typically serves one or more “stakeholders,” for example, an employer, client, funding agency (typically a government or private foundation), employer’s customer, end user, or some combination of these. The one stakeholder that engineers are always supposed to serve is “the public,” that is, those people whose lack of knowledge, skill, time for deliberation, or other resources renders them more or less vulnerable to the power an engineer wields on behalf of other stakeholders (Davis 1991). Since the 1970s, engineers have come to consider the public’s health, safety, and welfare “paramount.”

Engineering research can produce academic articles, technical standards, government reports, patents, designs for a product, or other technical documents. Engineering research can also produce physical artifacts, physical processes, software, copyrights, trade secrets, or the like. Much that engineering research produces can be marketed; much that cannot, such as safety standards or academic articles, may still be useful in one way or another.

The line between engineering research and the rest of engineering is not sharp. Let us draw the line this way: If engineers think they know the outcome of the engineering they are doing, it is not engineering research strictly speaking. If the outcome later teaches them that they did not know what they thought they knew, the engineering they did is research of a sort – let us call it “retroactive research.” A significant amount of engineering research seems to be retroactive. Engineers learn from unexpected failure. If the engineers justifiably think they might learn something previously unknown from the engineering they are doing, even if the research is designed to confirm previous research, what they are doing is engineering research strictly speaking. Development, distribution, use, maintenance, and even disposal can be stages of engineering research.

Engineering research, whether research strictly speaking or only retroactive, can be one of at least two kinds. Some engineering research is (we might say) “personal,” that is, research into what engineering as a discipline already knows (or thinks it knows) but an individual engineer (the one doing the research) or a certain research team does not. This sort of research is typically carried out by searching the Internet, going to a library, “asking around,” or otherwise accessing the discipline’s resources. More important for our purposes is what we might call “disciplinary research,” that is, inquiries seeking to add to the discipline’s resources what engineers cannot now find there. Personal research cannot change “the state of the art”; disciplinary research can. By this definition, much ordinary engineering, such as quality-control testing, is disciplinary research. No one knows whether the product being tested will pass until it has been tested. Thus, much engineering research, perhaps most, is not published (or even publishable). Though perhaps overly broad, this definition of “engineering research” has the advantage for our purposes of not excluding a priori activities this chapter should at least touch on.

Many of the ethical issues arising in engineering’s disciplinary research are the same as for scientific research, medical research, or both. For example, like scientists and medical researchers, engineers engaged in disciplinary research should avoid conflicts of interest, misusing data, and misrepresenting discoveries. They should also give credit where credit is due, get informed consent from human research subjects, accurately report their research expenses, and treat colleagues, employees, and students fairly. In what follows, I shall generally ignore ethical problems that engineering research shares with scientific or medical research, focusing on those problems (most) scientists and medical researchers do not have. The most difficult of these occur outside the laboratory, pilot, and digital model.

Some Differences Between Engineering Ethics and Research Ethics

While there now exists a substantial literature under the heading “engineering ethics” (or “professional responsibilities of engineers”), little of it concerns what is usually included under the heading “research ethics,” “scientific integrity,” “misconduct in research,” or the like. That is not because engineering ethics (the field of study) differs in substance from research ethics; much of it does not. Why, then, the separation? There are at least three reasons which, either independently or together, might explain why much of engineering ethics, especially the academic literature, is not included under the heading “research ethics.”

The first reason is an accident of history. Modern engineering ethics began as a field of professional ethics in the early 1970s, a response to a number of engineering scandals having little or nothing to do with engineering research. Among those founding scandals were the initial responses of the employer of three engineers who, in 1972, publicly reported life-threatening defects in equipment on the Bay Area Rapid Transit system, and the partial reactor meltdown at the Three Mile Island Nuclear Generating Station in 1979. Modern engineering ethics began in the field.

Modern research ethics seems instead to have begun as a response to misconduct among Nazi physicians working in concentration camps and other nonacademic environments. The misconduct in question was (more or less) in medical laboratories. The founding documents of research ethics – the Nuremberg Code (1947) and the Declaration of Helsinki (1964) – were both concerned primarily with research involving human subjects. These documents seem to have had little effect on ethics education in medical schools in the United States, much less elsewhere, until a whistleblower revealed a field study of syphilis (1932–1972) that the US Public Health Service carried out through a clinic associated with the Tuskegee Institute. That study violated both the Nuremberg Code and the Declaration of Helsinki – and, indeed, seems clearly to have involved physicians and nurses putting the success of their medical research ahead of the health of their research subjects: men with treatable syphilis that the researchers did not treat even as they encouraged the research subjects to believe otherwise. Medical schools seem to have responded to the Tuskegee scandal (and several other similar scandals that followed soon after) by teaching the standards of the Helsinki Declaration – often as part of a general course in “medical ethics.” (I say “seems” in this paragraph because this standard story has been challenged as myth. See, for example, Stark 2012)

But only in the 1980s did a series of scandals involving fabrication, falsification, and plagiarism among academic researchers suggest the need to teach research ethics to scientists, not simply to physicians, and to include in what was taught more than the protection of human subjects. Though at first even these scandals seemed to be related to medicine or fields close to medicine, especially psychology, the scandals soon spread to fields as different as history and physics. (Davis 1990)

Oddly, to this day, engineering has not suffered such a scandal. That is not to say that individual engineers cannot recall examples of misconduct by academic engineers similar to those creating scandals outside engineering, only that the misconduct of the engineers did not occur under conditions, such as the prominence of the researcher involved, sufficient to create a scandal. I cannot say whether there is something about academic engineering research that protects it against such scandals or engineering has just been lucky. Nonetheless, courses in engineering ethics typically include topics such as accurate documentation, giving credit, and admitting error that scientists (and physicians) would recognize as issues of research ethics. (See, for example, the issues covered in some recent texts on engineering ethics: Harris et al. (2018), McGinn (2018), and Starrett and Lara (2017)) But courses in engineering ethics also include topics not typically part of courses in research ethics, such as sustainable development or relations with management. And even those topics in engineering ethics that could be included in a course in research ethics are generally presented as concerned with problems arising in “ordinary engineering,” not academic research.

Whatever the similarities between engineering ethics and research ethics, the first published connection that academics made between them may have been a mistake. One of the earliest texts in engineering ethics (1983) suggested thinking of engineering (all or at least most of engineering) as “social experimentation,” that is, as research on human subjects. The authors, Mike Martin (a philosopher) and Roland Schinzinger (an engineer), had a point, of course. Engineers typically create products without knowing all that will occur when those products reach the field. Engineers learn from field experience. Sometimes they learn quite a lot. For example, air-conditioners using a relatively inert gas instead of a much more corrosive coolant seemed a good idea, but the wide use of Freon (replacing an older ammonia coolant) did serious damage to the ozone layer defending the Earth against the sun’s ultraviolet radiation. By the time the cause of the ozone layer’s decay was discovered and the use of Freon curtailed, a significant number of people around the world had developed skin cancer that they probably would not have developed otherwise. From this undoubted fact (the tendency of even ordinary engineering to have substantial unpredictable side effects on people), Martin and Schinzinger drew the conclusion that engineering should be conducted on the model of research on human subjects, that is, engineers should seek the informed consent of those on whom the engineers are “experimenting” (those whom the engineers’ engineering might harm). (Martin and Schinzinger 1983. They have continued to use that analogy through all later editions.)

The Martin-Schinzinger argument for informed consent is, of course, an argument from analogy. Like many arguments from analogy where the similarities claimed actually exist, their argument cannot be flatly refuted. Critics can only point out enough differences between the two sides of the analogy for a reasonable person to conclude that the analogy is too weak for the purpose to which it is being put. The differences that might lead a reasonable person to conclude that the analogy is too weak might not be enough to bring another to the same conclusion. Reasonable people may reasonably disagree about matters of degree. So, what differences weaken the analogy? Why might the Martin-Schinzinger conclusion be a mistake?

An experiment is a controlled study, whether carried out in a lab, a pilot, a digital model, or the field. An experiment typically has a definite start and finish. It has a relatively well-defined set of human subjects (if it involves human subjects at all). It has a relatively precise hypothesis to test. And the risks of the study are relatively well-known and easy to explain to research subjects, making “informed consent” relatively easy to obtain. Think of the controls typically imposed even on the field testing of a new medicine or new medical device. Such research is experimentation strictly speaking. But much engineering research “in the field” is different. While its start is relatively easy to determine (the date when a product is first released into the field), the date when the experiment ends cannot be so easily determined. Indeed, one of the objects of field research may be to determine when there is so little left of a certain engineering product, such as a certain brand of plastic, that the research may end. Much field research simply consists in looking for unexpected problems (much as public health workers look for evidence of the outbreak of disease). There is no precise hypothesis. Hence, the “full disclosure” necessary for proper consent is not possible. As with Freon, the engineers may be ignorant of both the important harm a product may cause and the people it may affect, until well after the product has been released into the field. Who would have imagined that air-conditioners might leak enough Freon to cause skin cancer worldwide – even among people hundreds of miles from any air-conditioner? Much field research is not “experimentation” because it lacks the controls typical of experimentation. It is, rather, a kind of uncontrolled observation, a search for some of life’s nasty surprises. When the field research involves an engineering product, the surprises can be quite large. Indeed (as with Freon), “the research subjects” can include everyone alive today and an unknown number of generations to come. Describing ordinary engineering as “social experimentation” makes ordinary engineering sound considerably safer than it is.

That may be one reason that engineering ethics has developed more or less independently of research ethics. Research ethics is mostly about controlled studies, especially experiments (strictly speaking). Most of engineering ethics is about engineering in the field, where there is much less control. The second reason that engineering ethics may have developed more or less independently of research ethics is that engineering research is typically taught in institutions separate from those in which scientific or medical research is taught. Even in a large university that teaches science, medicine, and engineering, the engineers will typically have their own school with its own faculty (just as medicine does). The teaching of engineering ethics can go on independently of any course in research ethics. And even when engineering students are forced to take a general course in research ethics, what is taught may seem irrelevant to them. The typical course in research ethics will be designed for scientific or medical research, not for engineering research. So, for example, the cases of misconduct offered for discussion will typically include only the misconduct of scientists or physicians (even when the issues could as easily arise in engineering research as in scientific or medical research). Much the same is true of the literature of research ethics. (For a recent exception, see Sunderland and Nayak 2015)

A third possible reason that engineering ethics has remained separate from research ethics is that engineering research has largely gone on independently of medical research and, to a lesser degree, of scientific research. There are exceptions, of course. Nuclear physics has included engineers in its research teams since the Manhattan Project. More recently, so many engineers have been involved in the projects of “big physics” that the demand for engineers trained for such involvement has generated a new field of engineering, “engineering physics” (or “engineering science”). Something similar has since happened in medicine as medicine has come to depend on mechanical or electronic devices, especially those devices requiring engineers to be present along with physicians during testing or operation. It is not surprising, then, that there is now a new field, “biomedical engineering,” for this sort of engineering.

Nonetheless, there is a difference between the developing relationship of engineering to science and of engineering to medicine. Big physics remains a small part of science. Most science remains “small science” in which there is little for engineers to do. In medicine, however, many fields of engineering besides biomedical engineering now engage in research in cooperation with physicians. Some of the research concerns medical devices. For example, several chemical engineers I know work on micro-pumps to place under the skin to deliver medications. Some electrical engineers I know are working on an “artificial eye” (a small camera connected directly to the brain’s visual cortex). And so on. But not all the relationships developing between engineering and medicine concern medical devices. Some of the developments concern medical procedures. For example, a mechanical engineer I know digitally models the spine of individual patients to help a surgeon decide where to place “pads” to repair the spine. Though not a biomedical engineer, he actually works with the surgeon’s patients, for example, touching a patient as part of taking measurements of damaged vertebrae. Yet, the code of ethics of the Biomedical Engineering Society is the only code of engineering ethics I know of that has anything to say about patients. That code makes the welfare of the patient, not the public, primary. (Biomedical Engineering Society, Code of Ethics 2004) It is, then, inconsistent with most engineering codes, a warning of problems that engineers engaged in one of the new fields of medical research may have to sort out. Medicine’s response to a similar problem was to separate public health (which does not have patients) from ordinary medicine (which does). (Compare Public Health Leadership Society, Principles of the Ethical Practice of Public Health (2002), http://ethics.iit.edu/ecodes/node/4734 (accessed May 16, 2018), with the American Medical Association, Code of Medical Ethics, https://www.ama-assn.org/delivering-care/ama-code-medical-ethics (accessed May 16, 2018).) Perhaps engineering will have to do something similar. Most engineering, or at least most engineering research, is closer to public health research than to ordinary medical research. The public’s health, safety, and welfare are paramount, not that of any individual stakeholder, not even that of the patient.

The Laboratory

Suppose an engineer, or more likely a team of engineers, undertakes research on a concrete that would weigh less per cubic foot than any concrete now available. Such research may take place in an academic laboratory, in the laboratory of a business manufacturing concrete, or in another sort of laboratory. Whatever the site of such initial research, the engineers in question should not undertake the research out of mere curiosity, a desire to expand human knowledge, or even the need to earn their keep. They should be able to explain how such research might benefit humanity overall. For example, they should be able to say: All else equal, the lighter the concrete in a high-rise building, the less expensive the building. Savings from lighter concrete would then be available for other beneficial uses, such as lower rent per square foot, making it easier to provide good housing for more people. Fundamental to the profession of engineering is a commitment to improving the material condition of humanity. (See, for example, the definition of “engineering” used by engineering’s accreditation agency (ABET): “the profession in which a knowledge of the mathematical and natural sciences gained by study, experience, and practice is applied with judgment to develop ways to utilize economically the materials and forces of nature for the benefit of mankind”, http://users.ece.utexas.edu/~holmes/Teaching/EE302/Slides/UnitOne/sld002.htm (accessed May 21, 2018).) There is no “pure engineering.” All engineering is “applied,” something that differentiates engineers from “pure scientists” (or, at least, those who claim to be “pure scientists”). What differentiates engineers from all scientists, if anything does, is that engineers are not primarily concerned with understanding nature or society but with improving them. In this respect, engineering resembles medicine rather than science. Neither engineering nor medicine values knowledge as such (though individual engineers or physicians may).

Having satisfied themselves of the likely practical utility of their research, the engineers would typically study the relevant literature for possible solutions to the problem posed. This is personal research, an attempt to determine the state of the art, not to advance it. Suppose the personal research teaches that adding finely ground oil-palm shells to sand, cement, and other typical ingredients of concrete has been shown to produce a lighter concrete. But suppose too that, since 1984, many studies have replaced conventional crushed granite aggregate with some sort of oil-palm-shell aggregate. Among these studies are some recent ones reporting that when concrete’s coarse aggregates were totally replaced with oil-palm-shell aggregates, the concrete lost much of its resistance to heat. These studies should trouble engineers for two reasons. First, in high-rise buildings, concrete is the primary defense against fire. Concrete protects the steel superstructure that a fire’s extreme heat would otherwise damage. Concrete that quickly disintegrates during a fire would make high-rise buildings more likely to collapse. Second, the experience of engineers is that what fire does quickly weather will do slowly. Oil-palm-shell concrete might, then, unduly reduce the durability of a building – forcing repair of weathering surfaces more often than ordinary concrete does – and thereby increase the cost of maintenance. All else equal, neither reduced durability nor increased lifetime cost is a consequence engineers can accept. (Yahaghi and Sorooshian 2018)

But this consequence of using oil-palm shell in concrete, though troubling, is not decisive. It merely poses a problem typical of engineering, optimization: can the concrete be made light enough to be useful, all else equal, without reducing its fire resistance or weather resistance so much that the concrete is too expensive over its lifetime, not safe enough, or otherwise unsatisfactory? That is a question appropriate for engineering’s disciplinary research. It is a question that seems well worth asking – at least if the researchers have some novel idea for an answer. But there are also other questions that the engineers should ask before the research goes any further. The three most obvious are:
  • Is there enough oil-palm shell available for a successful mix to meet demand? If not, can the supply of oil-palm shell be increased without encouraging large-scale substitution of palm plantations for natural forests, forcing the price of oil-palm shell so high that other useful products using oil-palm shell are driven from the market, or otherwise producing substantial undesirable side effects? We might call this “the Freon question.”

  • Can those who use the new concrete to build be relied on to follow the directions that will have to accompany it? If not, then the new concrete may not be as good in the field as in the lab or pilot. This is the “scaling-up question.”

  • Can the new concrete be marketed successfully? Or is the improvement it makes too small for it to be attractive in a crowded market? This is the “demand question.”

Little engineering research, even laboratory research, is done without someone other than the engineers paying for it. For academic engineers, the source of funding is typically a government agency, private foundation, or manufacturer. For nonacademic engineers, the source of funds is typically their employer (though sometimes the employer is just a conduit for money from another business or a government). Those funding the research, whoever they are, should seek at least rough answers to all four of these questions (the optimization question and the three just listed) before funding the research. If they do not, the engineers should discuss the four questions with the funders anyway – for at least two reasons. First, the funders may have information that the engineers do not. Discussing such questions with funders is a chance for the engineers to gather that information. Second, engineers should try to avoid wasting social resources. So, for example, if the answer to any of the four questions above is simply “no,” the engineers should give up the research even if the funder is willing to provide the funds.

Another question the engineers should ask, one that they should ask both of themselves and other stakeholders all through their research, including the pilot and field testing, is: What other undesirable side effects should we be looking for?

Ideally, the answer to all these questions, but especially this last, will depend on reliable information, whether engineering knowledge or knowledge of another stakeholder. To ensure such reliable information, the research team should include representatives of all stakeholders from marketing and manufacturing to end users and the public. In practice, all the relevant information that the engineers involved in a certain piece of laboratory research may have is their own “engineering judgment,” a feel for the new product that long experience with similar products has given them. While engineering judgment is not very reliable, it is always better than nothing. If engineers ask the right questions, they may get the right answers, even if they are only asking themselves. Often gathering information from other stakeholders is – or at least will seem to be – too expensive, especially in an early stage of laboratory research and especially in what seems to everyone involved, including the funder, to be “ordinary engineering.”

The Pilot

A pilot is an experiment on a scale beyond what a typical laboratory can manage but still more controlled than field testing. For example, testing a self-driving car on a track in the basement of a large building is probably still laboratory research, but testing the same car on an open-air track several miles long is a pilot. Testing has been “scaled up” enough to approach conditions in “the real world” while still maintaining tight control over relevant variables. But testing the same car on the streets of even a single city while the streets are in ordinary use counts as field testing. This set of distinctions is not very precise, but it should be good enough for our purpose – which is to identify a kind of research between the laboratory and the field. There can, of course, be several pilots for a single project, each closer to conditions in “the real world” than its predecessor.

Perhaps the most important problem of pilot design is to scale the pilot up enough to identify problems not obvious in laboratory tests, thus justifying the additional cost of the pilot while maintaining control of experimental variables. So, for example, testing a self-driving car on an outdoor track may allow the car to go faster than on an indoor track; allow it to encounter more varied combinations of turns, obstacles, and natural light; and so on. But pilot testing probably should not be done on days when there is rain, the track is icy, or there is a strong wind, unless there is some way to measure accurately these departures from the standard test conditions.

One important ethical problem in designing or conducting a pilot is making sure the product being tested does not escape into the field. So, for example, there should be no way for the self-driving car of our current example to turn left rather than right at a certain intersection on the track and enter an ordinary street with ordinary traffic. Part of what makes a pilot a pilot, whatever its scale, is enough control over the test conditions to protect the larger world from harm.

Another important ethical problem is explaining the limits of pilot testing to those relying on the engineers’ research. While pilot testing often identifies problems not evident in the lab, it often misses problems that “the real world” will reveal. Some of these problems will appear only in field testing simply because the field is so much larger than any pilot can be. What are statistically insignificant, or wholly unobservable, problems in a pilot may become statistically significant in the field. Other problems may appear because the reduced control of testing in the field allows the anarchy of the real world to enter the research. In the field, the supplies received may not be quite what they are supposed to be; those following the instructions on a package may misread them; the instructions may not take into account unusual combinations of temperature, precipitation, and wind; and so on. The field is the domain of Murphy’s Law; the pilot is not (or at least should not be).

Digital Models

Digital models seem to be useful to many engineers because they allow the engineers to test at their desks what they would otherwise have to test in a lab, a pilot, or the field – or, because of the time, size, or cost of those tests, not test at all. So, for example, if the engineers working on a lighter concrete want to compare the overall lifetime cost of various mixtures of lighter concrete with the competing ordinary concrete, what should they do? They can get lifetime costs for competing concretes already in use from records that large buildings keep – or, at least, they can get lifetime costs for those concretes that have been in use so long that future costs seem predictable. The engineers probably cannot get similar information for their own lighter mixtures for at least three reasons. First, since their mixtures are new, information based on actual use is unavailable. Second, since constructing a high-rise building is expensive, cost alone may prohibit field testing. Third, the researchers probably will not have the time to do a lifetime study in the field. For example, the useful life of a high-rise building is typically many decades. Indeed, some of the earliest high-rise buildings (such as Chicago’s Monadnock Block) are still in use after more than a century. By the time field research on the new lighter concrete’s lifetime costs is complete, it is likely that the original researchers will have died and new products will have made the concrete mixtures under study obsolete.

Pilot testing might have all the same problems as field testing in this example. The pilot might require many years to settle the useful life of the new concrete. Laboratory testing could avoid such problems but only by making the test conditions less realistic, for example, exposing small parts of a high-rise building to temperatures, winds, and other artificial weather not likely, indeed, not possible, in the field. The experience of engineers is that laboratory testing often fails to identify problems that pilot or field testing identifies. That is why engineers do both pilot and field testing. So, while laboratory testing is better than nothing, it alone is generally not good enough when the public health, safety, or welfare is at stake – as it is when the subject of research is concrete to be used in high-rise buildings.

Here a digital model might seem to take the place of pilot or field testing. Lab testing would have to provide enough information to run the digital model. The model would then provide outcomes for as many years as desired, assuming whatever weather, maintenance, and misuse the researcher thinks worth studying. Of course, the outcomes could be no more reliable than the model itself and the information the lab tests and historical records provided. In general, that is not very reliable. Lab testing will generally miss some information that pilot or field testing would reveal, and all models are incomplete, even when the relevant historical records are not. Further, the more information a model includes, the more complex it becomes; the more complex, the more errors in programming; the more errors in programming, the less reliable. Unless a model has been in wide use for many years, it is hard to know how reliable it is. And, even if it has been in wide use for a long time, the model may still have flaws that only further use will reveal. Probably, the best way for engineers to ensure the reliability of their model-based predictions is to use several models developed independently, accepting only those results that all the available models, or at least most of them, generate.
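To make that cross-check concrete, here is a minimal sketch of how such a consensus test might look, assuming two hypothetical, independently developed lifetime-cost models. The model functions, parameter names, and the 10% tolerance are illustrative assumptions, not part of any standard method.

```python
# A minimal sketch of the multi-model cross-check described above. The two
# "independently developed" lifetime-cost models, their parameters, and the
# 10% tolerance are hypothetical, for illustration only.

def lifetime_cost_model_a(mix, years):
    """Hypothetical model A: initial cost plus constant annual maintenance."""
    return mix["initial_cost"] + mix["annual_maintenance"] * years

def lifetime_cost_model_b(mix, years):
    """Hypothetical model B: maintenance cost grows 2% per year (assumed wear)."""
    cost = mix["initial_cost"]
    annual = mix["annual_maintenance"]
    for _ in range(years):
        cost += annual
        annual *= 1.02
    return cost

def consensus_estimate(mix, years, tolerance=0.10):
    """Accept an estimate only if the independent models roughly agree."""
    estimates = [lifetime_cost_model_a(mix, years),
                 lifetime_cost_model_b(mix, years)]
    spread = (max(estimates) - min(estimates)) / min(estimates)
    if spread > tolerance:
        # Disagreement beyond tolerance: return no consensus, only the raw
        # figures, so engineers must exercise judgment rather than accept a
        # silently averaged number.
        return None, estimates
    return sum(estimates) / len(estimates), estimates

# Example with made-up numbers for a lighter concrete mix in a high-rise.
mix = {"initial_cost": 1_000_000, "annual_maintenance": 20_000}
estimate, raw_estimates = consensus_estimate(mix, years=50)
print(estimate, raw_estimates)
```

With the made-up numbers above, the two models disagree by more than the tolerance, so the sketch reports no consensus estimate – exactly the situation in which, as noted next, engineers must fall back on their own judgment.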

Engineers relying on several digital models will have to use their judgment, especially when different models give significantly different results. Engineers should take care to explain the limits of this sort of testing to those who must rely on it. Too many people are likely to accept without question the results of tests using even a single digital model – unless the researchers explain the limits of such testing. Explaining the limits of testing with digital models was a major problem for engineers (and computer scientists) during the 1980s debate concerning “the Star Wars” defense against strategic nuclear missiles.

The Field

All (disciplinary) research outside the controlled conditions of a laboratory, pilot, or digital model is field research. All else equal, field research is riskier than the corresponding research in a laboratory, pilot, or digital model. The difference in risk can be enormous. For example, the Chernobyl nuclear disaster (1986) began as a field test. Chernobyl’s reactors had three backup diesel generators. Each generator required 15 s to start up but took over a minute to attain the speed required to run one of the main coolant pumps. Chernobyl’s engineers judged this 1-min power gap unacceptable. Too much can happen in a nuclear reactor in a minute when the cooling system is not working. Desktop analysis indicated that one way to bridge the 1-min gap was to use the mechanical energy of the steam turbine and residual steam pressure to generate electricity to run the main coolant pumps while the generator was reaching the correct RPM, frequency, and voltage. But, of course, the analysis had to be confirmed in the real world. The engineers had to work out and then successfully test a specific procedure for effectively employing residual momentum and steam pressure. Previous attempts – in 1982, 1984, and 1985 – had failed. The 1986 attempt was scheduled to take place at Reactor 4 during a maintenance shutdown. The test focused on refinements in the switching sequences of the electrical supplies for the reactor.

The test began just after 1:23 am on April 26, 1986. The diesel generator started and sequentially picked up loads. The turbine generator supplied the power for the four main circulating pumps as it coasted down. The test was all but complete 40 s later. But, as the momentum of the turbine generator that powered the water pumps decreased, the water flow also decreased, producing more steam bubbles in the core. The production of steam reduced the ability of the coolant to absorb neutrons, increasing the reactor’s output of heat. The increased heat caused yet more water to become steam, further increasing heat. During almost the entire 40-s test, the automatic control system successfully counteracted this destructive feedback, inserting control rods into the reactor core to keep the temperature down. If conditions had been as planned, the test would almost certainly have been carried out safely.

The Chernobyl disaster resulted from attempts to boost the reactor power – and, therefore, temperature – once the test had started (something inconsistent with approved procedure). The approved procedure called for Reactor 4’s power output to be gradually reduced to 700–1000 MW. The minimum level established in the procedure (700 MW) was achieved about an hour before the test began. However, because of the natural dampening effect of the core’s neutron absorber, reactor power continued to decrease. As the power dropped to approximately 500 MW during the test, one of the engineers conducting the test mistakenly inserted the control rods too far, nearly shutting down the reactor. Operators in the control room soon decided to restore the power and extracted the reactor control rods, but several minutes elapsed between the extraction and the time that the power output began to increase and stabilize at 160–200 MW. The extraction withdrew the majority of control rods to their upper limit, but the rapid reduction in power during the initial shutdown and subsequent operation at less than 200 MW led to increased dampening of the reactor core (because of accumulation of xenon-135, an unstable fission product of uranium that absorbs neutrons at a high rate). To counteract this unwanted high absorption, the operators withdrew additional control rods from the reactor core.

Then, about the time the test ended, there was an emergency shutdown of the reactor. The shutdown started when someone in the control room – whether as an emergency measure, by mistake, or simply as a routine method of shutting down the reactor when the test ended – pressed the button of the reactor’s emergency protection system. Because of a flaw in the design of the graphite-tipped control rods, the dampening rods displaced coolant before inserting neutron-absorbing material to slow the reaction. The emergency shutdown therefore briefly increased the reaction rate in the lower half of the core. A few seconds after the start of the emergency shutdown, there was a massive power increase, the core overheated, and seconds later this overheating produced the first explosion. Some of the fuel rods fractured, blocking the control rod columns and causing the control rods to become stuck at one-third insertion. Several more explosions followed, exposing the reactor’s graphite moderator to air, causing it to ignite. Since the reactor lacked a containment (a thick concrete shell protecting the outside world), the fire in the reactor sent a plume of highly radioactive smoke into the atmosphere, causing dangerous fallout over a huge area of Ukraine – and, eventually, less dangerous fallout over much of the world. The effort to halt the nuclear contamination and avert a much greater disaster soon involved over 500,000 workers and cost an estimated 18 billion rubles, crippling the Soviet economy. (Davis 2012)

The Chernobyl disaster is an extreme example of the risks of field testing. What I have called “the anarchy of the real world” turned what must have seemed a relatively safe test of a small improvement into a major disaster with worldwide consequences. There are enough other examples of the anarchy of the real world transforming ordinary engineering into a disaster that one sociologist has coined a term for it, “normal accidents.” (Perrow 1984). An accident is a side effect of appropriate conduct, but one both undesired and unplanned, that could have been prevented had circumstances leading up to it been better managed. Accidents are “normal” in engineering because engineers routinely produce systems of immense power that are both novel and complex. The power makes the systems dangerous. The novelty makes experience an imperfect guide to the dangers. And complexity makes properly managing all of even the known dangers difficult. Sooner or later something must go wrong.

This section has so far focused on field testing, not just releasing the products of engineering into the field. There is a reason. Field testing is the safest way to release the products of engineering into the field, the Chernobyl disaster notwithstanding. The more common way to release the products of engineering into the field is to sell them to anyone who wants to buy (after laboratory, pilot, and perhaps digital modeling and field testing). The sale of Freon-filled air-conditioners is a good example of such release.

For engineers, the release of a product into the field does not end their responsibility. Engineers should track their products in use, keeping detailed (and accurate) records of complaints, returns, accidents, and so on. They should examine those records, looking for signs of trouble. Of course, this procedure would not have picked up a problem like that with Freon. For that, engineers would have to rely on scientists, activists, news media, and other “outsiders” to pick up effects that customers or end users are unlikely to notice. Sometimes the outsiders will be right about a problem, as they were about Freon. Sometimes they will be wrong, as they seem to have been about electromagnetic radiation from cellphones causing brain cancer. But once engineers are alerted to a possible problem, part of good engineering is keeping up with the relevant non-engineering literature. And even when that literature does not confirm a problem, engineers may need to do more than keep up with it. It is at least arguable that engineers should follow “the precautionary principle,” that is, not allow lack of scientific certainty about the existence of a threat to the public health, safety, or welfare to prevent them from undertaking research concerning how to protect the public from that (possibly nonexistent) threat. (Read and O’Riordan 2017)
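As one hedged illustration of what such record-keeping might involve, the sketch below assumes a hypothetical in-house log of field reports and flags any product whose count of complaints, returns, or accidents crosses an arbitrary threshold. The record fields, names, and threshold are illustrative assumptions only, not a prescribed procedure.

```python
# A minimal sketch of post-release tracking, assuming a hypothetical in-house
# log of field reports. The record fields, names, and threshold are
# illustrative assumptions, not a prescribed procedure.

from collections import Counter
from dataclasses import dataclass

@dataclass
class FieldReport:
    product_id: str
    category: str        # e.g., "complaint", "return", "accident"
    description: str

def flag_emerging_problems(reports, threshold=5):
    """Count reports per (product, category) and flag any count at or above
    the threshold as a possible sign of trouble worth engineering review."""
    counts = Counter((r.product_id, r.category) for r in reports)
    return [key for key, count in counts.items() if count >= threshold]

# Example with made-up reports.
reports = [FieldReport("muffler-v2", "complaint", "excess vibration")] * 6
print(flag_emerging_problems(reports))   # [('muffler-v2', 'complaint')]
```

A real procedure would, of course, be set by the organization’s own processes for quickly identifying threats to the public health, safety, or welfare and for responding appropriately.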

Conclusion

This brief survey of the ethics of engineering research has emphasized three themes not found in research ethics generally. First, a probable contribution to the public health, safety, or welfare is required to justify any engineering research. Any research not at least arguably likely to improve the material condition of humanity is not engineering research – or at least not good engineering research. Engineers should avoid such “pure” research. Second, engineering research is inherently dangerous. Engineers often, sometimes without realizing it (as in the use of Freon), work with great powers and are therefore capable of doing great harm (as well as great good). They should be alert to the risks they may be taking. While engineers have developed many procedures for reducing the risks that their products may pose, most notably pilot testing and use of digital models, engineers should always be looking for problems they have missed, even after their products have been released into the field. Some problems they may discover on their own, but discovering others may (and increasingly seems to) require the cooperation of scientists, public health departments, and the public. Third, much of engineering research is part of ordinary engineering, not academic research or even nonacademic research going on in a laboratory, pilot, or digital model. For that reason, it makes sense to treat the ethics of engineering research as part of engineering ethics (the ethics of a profession) as well as part of research ethics. While engineers may have much to learn from ordinary research ethics, engineering ethics is more than ordinary research ethics. (I presented a draft of this chapter to the Philosophy Colloquium, Illinois Institute of Technology, on October 12, 2018, benefiting from the comments of several of those present, especially Warren Schmaus.)

References

  1. Biomedical Engineering Society, Code of Ethics (2004) http://ethics.iit.edu/ecodes/node/3243. Accessed 16 May 2018
  2. Davis M (1990) The new world of research ethics: a preliminary map. Int J Appl Philos 5(Spring):1–10
  3. Davis M (1991) Thinking like an engineer: the place of a code of ethics in the practice of a profession. Philos Public Aff 20(Spring):150–167
  4. Davis M (2012) Three nuclear disasters and a hurricane: some reflections on engineering ethics. J Appl Ethics Philos 4:1–10
  5. Harris et al (2018) Engineering ethics: concepts and cases, 6th edn. Wadsworth, Independence, KY
  6. Martin MW, Schinzinger R (1983) Ethics in engineering. McGraw-Hill, New York, pp 55–62
  7. McGinn R (2018) The ethical engineer: contemporary concepts and cases. Princeton University Press, Princeton
  8. Perrow C (1984) Normal accidents: living with high-risk technologies. Basic Books, New York
  9. Read R, O’Riordan T (2017) The precautionary principle under fire. Environ Sci Policy Sustain Dev. http://www.environmentmagazine.org/Archives/Back%20Issues/2017/September-October%202017/precautionary-principle-full.html. Accessed 19 May 2018
  10. Stark L (2012) Behind closed doors. University of Chicago Press, Chicago
  11. Starrett SK, Lara AL (2017) Engineering ethics: real world case studies. ASCE Press, Reston, VA
  12. Sunderland ME, Nayak RU (2015) Reengineering biomedical translational research with engineering ethics. Sci Eng Ethics 21:1019–1031
  13. Yahaghi J, Sorooshian S (2018) The role of engineering ethics on concrete fire safety. Sci Eng Ethics 24:819–820

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Humanities Department, Illinois Institute of Technology, Chicago, USA
