In this section, I will first propose the view that formal ethics poses an obstacle to the ideals of inclusive moral deliberation. Second, I will attempt to go beyond this principled analysis and show that formal ethics promotes and perpetuates the power asymmetries existing in current contexts associated with autonomous systems. Third, I will argue that formal ethics does not support the development of moral understanding and may not present a viable path towards equalizing asymmetric relationships of power.
Formal ethics as an obstacle to the ideals of inclusive moral deliberation
I will first illustrate ways in which formal approaches to ethics can present an obstacle to inclusive moral deliberation in principle. Recall that the ideal can be outlined as requiring that all parties affected (i) take part, (ii) have an equal possibility to contribute, and (iii) are capable of empathizing with each other, as well as that (iv) power differences are neutralized and (v) participants engage with each other in transparent ways, cf. [16, pp. 65–66].
There are at least two facets to formal ethics as an obstacle to inclusion in moral debate relating to the above criteria: formalisms in ethics may, regarding (i)–(ii), present a technical as well as a social barrier to participating in moral discourse with an equal possibility to contribute, and, regarding (iii), they often have as their stated goal to “remove sentimentalism” and may thus dissuade participants from empathizing. I will turn to (iv)–(v) in Sect. 4.2, where I will show that formal ethics may perpetuate power asymmetries that hinder scrutiny and transparent debate.
Technical and social barriers to participation
Concerning criteria (i)–(ii), formal ethics presents a barrier to entering moral discourse for those without a proper command of the formalisms. Formalisms are challenging because they require significant knowledge of elements not directly associated with ethics, such as symbolisms or mathematics.
Requiring certain educational attainments may seem unavoidable, however. Proper command of natural language, e.g., is also a prerequisite for moral discourse.Footnote 12 An asymmetry in discussion techniques likewise exists between those well-educated in the humanities and those who have difficulty articulating their contributions. Formalisms, or membership in a group associated with particular skills, may also present social barriers to participation. The notion of ‘moral power’ as the “degree to which an actor, by virtue of his or her perceived moral stature, is able to persuade others” is an important one to consider when striving for the ideals of inclusive moral deliberation. Consequently, I maintain that neither a lack of technical abilities in formal ethics nor a lack of education in moral philosophy should present a barrier to moral discourse. The point, then, is that even a trained ethicist needs to advance moral arguments with meekness. This means that moral discourse demands the allocation of time and patience, including articulating refutations in ways the respective addressees will comprehend. It also requires an awareness of the power emerging from authority as tacitly perceived by laypeople.Footnote 13 Expertise in ethics, hence, should not be about moral authority but about being committed and able to deliver a transparent, intelligible, and well-balanced exposition of arguments, typically conveyed in accessible natural language. Thus, even if formal ethics succeeds in increasing the rigor of some approach to ethics, see, e.g., [22, p. 180], inclusion in moral debate demands re-translating the results into comprehensible and inclusive natural language.
In moral argument, formalisms instead often seem to obscure the content of the proposed solution. A case in point is the formalism proposed by Bogosian, see . The parameters that enter his computational framework for balancing competing moral theories would at best serve to translate moral consensus into operational computer code. Contrary to this, Bogosian argues from within his formal framework directly, claiming that “[t]his procedure, however ad hoc, seems to be the best possible way of approaching this particular case”.
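To make concrete what such parameterized balancing might look like, consider the following minimal sketch of a credence-weighted choice among competing moral theories. It is an illustration in the spirit of such frameworks, not Bogosian's actual formalism; all theory names, scores, and weights are hypothetical.

```python
# Minimal sketch of credence-weighted balancing of competing moral theories.
# Illustrative only: the theories, scores, and credences are hypothetical,
# not taken from Bogosian's framework.

def choose_action(actions, theories, credences):
    """Pick the action with the highest credence-weighted score.

    actions:   list of action labels
    theories:  dict mapping theory name -> scoring function(action) -> float
    credences: dict mapping theory name -> weight, summing to 1
    """
    def weighted_score(action):
        return sum(credences[name] * score(action)
                   for name, score in theories.items())
    return max(actions, key=weighted_score)

# Two stylized theories disagree about a driving maneuver.
theories = {
    "utilitarian":   lambda a: {"swerve": 0.9, "brake": 0.4}[a],
    "deontological": lambda a: {"swerve": 0.2, "brake": 0.8}[a],
}
credences = {"utilitarian": 0.6, "deontological": 0.4}

print(choose_action(["swerve", "brake"], theories, credences))  # -> "swerve"
```

The sketch illustrates the point made above: every numerical parameter encodes a substantive moral stance, so the formalism does not remove the need for deliberation but merely relocates it into the choice of scores and weights.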
Barriers to empathy
A conception common to formal approaches to ethics seems to be that human emotions lead moral judgments astray. For instance, Gensler writes that “[e]ven when the logic is clear to us, we might be unable to be consistent because of psychological defects or strong emotions.” [22, p. 19] Anderson and Anderson claim that “[h]umans are prone to getting ‘carried away’ by their emotions to the point where they are incapable of following moral principles.”Footnote 14
The notion of inclusive moral deliberation, which borrows from Habermas’ discourse ethics , stresses the need to enable participants to engage in discourse committed to a non-deceitful and open-minded exchange of arguments. Concerning criterion (iii) above, participants should also be “capable of empathizing”, meaning that they should have both the capacity and the resources, e.g., time and information, to develop empathic reactions. Thus, if the ideals of discourse ethics are acknowledged, humans are required to take part in moral deliberation, as these capacities seem to be uniquely human (as opposed to artificial beings). Irrespective of humanity’s ability to design artificial moral agents that may eventually turn out to be in some way superior to humans in terms of their capabilities for moral deliberation or even their capacity for empathy, such artificial agents would face the same demands for letting those affected, including humans, participate inclusively.
However, current formal approaches to ethics of the top-down, rule-based type,Footnote 15 such as Gensler’s, at least, do not appear to facilitate human empathy or to allow for space and time in which participants may relate to each other. Visionary concepts, such as the artificial moral advisor proposed by Giubilini and Savulescu , instead promise to facilitate the alignment of one’s behavior with one’s own moral standards even under the pressures of time. The focus, hence, seems to lie in increasing the time efficiency of moral deliberation rather than its quality. Similarly, bottom-up approaches to computational ethics do not promote participants’ empathizing with each other in inclusive moral debate either. This is because bottom-up approaches do not formalize ethics itself, but rather formalize the process of arriving at a machine capable of producing moral evaluations from data. Even if the outputs of the machine act only as suggestions to humans, and even though the algorithms may take as inputs data on moral judgments that involved emotions and empathizing, the purpose amounts to automating part of the deliberation process to increase its time efficiency. In light of the ideals of discourse ethics, such increases in efficiency need to be well argued for if the typical casualty of this kind of rationalization is the time and effort put into empathizing with others.
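To illustrate the bottom-up pattern in its simplest form, consider the following sketch, in which a generic classifier is fitted to entirely hypothetical labeled moral judgments and then queried for new cases. The features, labels, and scenario encoding are invented for illustration; no actual system is implied.

```python
# Minimal sketch of a bottom-up approach: fit a generic classifier to
# hypothetical annotated moral judgments, then query it for new cases.
from sklearn.linear_model import LogisticRegression

# Each row encodes a scenario, e.g., [harm_to_others, benefit_to_actor,
# consent_given]; the label records whether annotators judged the act
# permissible (1) or not (0). All values are invented.
X_train = [
    [0.9, 0.1, 0.0],
    [0.1, 0.8, 1.0],
    [0.7, 0.6, 0.0],
    [0.0, 0.5, 1.0],
]
y_train = [0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# The fitted model now issues moral evaluations in milliseconds; whatever
# emotions or empathizing shaped the annotators' judgments is frozen into
# the training data and is no longer part of the deliberation itself.
print(model.predict([[0.5, 0.5, 0.5]]))
```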
Modern technology indeed raises the issue that decisions with salient ethical implications can be—and thus potentially also have to be—taken in fractions of a second. Autonomous driving once again illustrates this. There is little controversy that self-driving cars are a technology that, in principle, should be developed, since the number of accidents is typically assumed to decrease significantly with its adoption .Footnote 16 Still, the debate on how self-driving cars should operate in dilemmatic or even mundane situations remains far from settled . However, novel arguments on ascribing responsibilities bring in notions of subjective user experience , which seems challenging to capture using existing formal approaches to ethics.
Still, it may be possible for formal approaches to facilitate empathy in moral discourse. Currently, however, even in approaches that are not directed mainly towards enabling computational implementations, empathy does not seem to play a significant role.
Approaches to automating moral deliberation, in turn, will need to accommodate the implementation of moral consensus. Perhaps such a consensus can be implemented as a kind of law without regard to the empathic actions that occurred during the inclusive moral deliberation. In that case, this law probably needs to be quite specific. If, however, such a level of specificity should turn out infeasible, moral consensus needs to be formulated in ways that can be extrapolated to a range of circumstances. This, in turn, might require the formal ethics underlying the implementation to incorporate empathy, which some researchers are already working on, see, e.g., [59, 60].
Extrapolations of agreed-upon moral norms onto other contexts incur the danger of being opaque, effectively concealing the moral saliency in potentially highly relevant applications. This leads to aspects (iv)–(v), by which formal ethics may hinder the challenging of moral norms. Such a possibility to challenge within a framework of inclusive moral deliberation, however, is necessary if moral progress is not to be impeded.
Formal ethics as a promoter of power asymmetries
Having shown that formal ethics poses obstacles to the ideals of a Habermasian notion of inclusive moral deliberation by impeding the opportunity to participate equally and to empathize in moral debate, I will now consider the further requirements to neutralize power differences and to engage in transparent discourse. I will frequently draw on the example of autonomous driving to highlight practical considerations when addressing power asymmetries. Autonomous driving is a salient use-case, as the debate on its ethical implications persists alongside efforts by large multinational companies to enter markets as early as 2025 .
In Sect. 4.2.1, I will argue that there is already evidence of asymmetric power relations regarding automation technology, as companies can dominate the introduction of ethical principles due to their largely unrivaled position of producing both the machines and the code for automation. Even before society-wide moral discourse can settle on moral consensus and regulations, precedents are set that constitute de facto moral decisions. The mechanisms that guide success in these cases are not necessarily those of the most convincing moral argument but may instead reside in economic benefit.
Second, in Sect. 4.2.2, I will advance the view that technological products employing formal ethics could result in a central ethical standard perpetuated by decentralized devices. The formalisms undergirding these standards will likely be limited in how they can reflect the variety of stances in moral discourse. Accordingly, these limitations should be explicitly addressed in inclusive moral deliberation.
Accumulation of power in the technology industry
The autonomous driving industry is already beginning to implement features without important ethical questions being settled. For instance, Elon Musk writes that “it would […] be morally reprehensible to delay release [of the autopilot functionality] simply for fear of bad press or some mercantile calculation of legal liability” . Still, according to Mider, “Musk described Autopilot as a kind of rough draft, one that would gradually grow more versatile and reliable until true autonomy was achieved.”  Musk’s statement came only months before the first publicly reported fatality due to the self-driving system and only a few days before a software update that Tesla claimed would have prevented the accident . Accidents can never be averted with complete certainty. However, irrespective of whether one disagrees with Tesla’s trial-and-error approach, Tesla is an example of an entity that uses its power over both production and formalisms to pursue a particular ethical stance on autonomous driving that, in essence, amounts to simple utilitarian calculus. The instruments of power that the public can wield against such a unilateral perpetuation of moral views are limited. They mostly rest in the market or in the opportunity to vote for stricter regulation in upcoming elections. Both these instruments, however, are only of a post-hoc nature, as it is currently within the company’s power to set the pace by bypassing actual moral discourse.
For instance, Tesla announced that, from 2016 on, all of its cars would have the hardware necessary to implement autonomous driving . This announcement amounts to one of the most important technological proponents of autonomous driving unequivocally deciding that, in terms of sensory input and computational capacities, nothing more is needed to allow for ethically aligned autonomous driving. Note that this happened even before what was probably the first preliminary set of ethical guidelines on autonomous driving had been published by a national governmental agency, cf. . The hardware and software in its cars put Tesla in a position to determine the self-driving algorithms at will via “over-the-air” software updates , almost simultaneously affecting its current global fleet of more than a million cars . As Mider puts it, “Musk’s decision to put Autopilot in the hands of as many people as possible amounts to an enormous experiment, playing out on freeways all over the world.”
While the capabilities of current autonomous driving systems may not yet have reached a level that allows certain controversial moral evaluations to be made in either dilemmatic or more mundane situations, this may soon be the case. Despite this, moving forward in implementing autopilots without the hardware and software capacity to address ethical quandaries rigorously can be viewed as a particular stance in the moral debate as well: detailed ethical considerations are deemed of low importance, while the company controls the message that safety is key.
Even if one agrees in principle, the details of Tesla’s autopilot are a company secret, which excludes the public from scrutinizing an important moral matter that affects large parts of society [67, 68]. Instead of merely presenting a technical or social barrier, as argued in Sect. 4.1.1, the application of formalisms to ethically salient domains can here even be protected as a trade secret .Footnote 17
In defense of the companies pursuing automation in morally salient domains, market pressures may be the reason why there is a lack of incentives to engage in inclusive moral discourse first. One might argue that competition drives innovation, and that the earlier autonomous vehicles are marketable, the more lives and money will be saved due to reduced accidents. However, it may only be hoped that the company with the best package in terms of value and ethical considerations would emerge victorious. Perhaps inclusive discourse would not even require a full revelation of trade secrets. Instead, it might suffice to lay open qualitative explanations, e.g., of how an autonomous vehicle would act, presented in a widely comprehensible form. In light of my earlier attempt to contrast a liberal marketplace with an inclusive approach to moral deliberation in Sect. 3.1, this would at least be closer to the demands of the latter, as the details of the moral debate would be made more transparent. However, a free market would still lack just procedures that guarantee that moral matters can be deliberated inclusively without the interference of existing power structures. On the market, such power structures could emerge from the simple fact that capital is accumulated based on economic success with products that are quite remote from the moral matters to be deliberated. This route to economic power is exactly what Musk laid out in his agenda, cf. .
In these matters, it is not formal ethics per se that creates or perpetuates asymmetric relations of power, which are already strong in the technology industry [70, Ch. 1]. However, with formal ethics as discussed in Sect. 2.2 being highly reliant on the technology industry, and as a presumed facilitator of the trend of autonomous systems that offer sweeping promises of increased comfort and safety, it may be complicit. As I have tried to illustrate above, powerful technology companies venture into automation in morally salient domains without adhering to inclusive deliberation. By virtue of their access to the resources to implement ethical frameworks, they unilaterally push their moral agenda relatively uninhibited by challenges.
New ideas are needed to decouple economic power from the power to determine how moral issues will be addressed in the implementations of products. For instance, impartial bodies would need to provide evidence that the qualitative explanations do indeed match the implementation based on formal ethical frameworks. As long as this is not the case, advancing formal approaches to implement moral deliberation, or even a mere alignment with moral norms, presents the danger that existing relationships of power (economic or otherwise) may dominate the processes that determine moral norms within society.
Implications of perpetuating moral norms through limited formalisms
As considered in the previous section, formal ethics is an instrument of power that can be exploited to perpetuate particular moral norms. In this section, I will consider ways in which formal ethics may, perhaps unwittingly, limit the breadth of moral norms in practice. Instead of arguing against implementing formal ethical evaluations per se, I will argue for an explicit consideration of these effects and, ultimately, of the limits imposed on the moral norms to which autonomous systems may be aligned. If a particular moral norm is tacitly favored by some formalism, even if it would qualify as “good” by some measure, centralized implementations may constitute an accumulation of power that contradicts the notion of inclusive moral deliberation.
To illustrate, consider that even though today's mobile devices are highly decentralized, software updates may change the behavior of millions of devices almost simultaneously, creating a centralized system in effect. Himmelreich denotes the emergence of significant patterns from the aggregation of identical programming code in millions of devices as the ‘challenge of scale’ . Analogously, identical ethical formalisms in many devices may replicate a particular moral view. They also replicate the limitations of the formalisms required to program the devices. These limitations may turn out to be unavoidable, as it is likely that no formal approach to ethics can encompass all types of ethical considerations. There may also be a mismatch between society’s moral consensus and what an ethical formalism allows to be implemented.
Consider the case proposed by Himmelreich in which both an autonomous vehicle and a pedestrian are approaching a pedestrian crossing . Such situations may be imbued with high uncertainty. Humans intuitively judge whether they are being noticed, what others might be about to do, and the likelihood of particular events, such as coming across pedestrians at a specific time of day. In autonomous cars, both limitations in sensor hardware and the implementation of ethical evaluations restrict what can be taken into account. These limited capabilities of both the formal ethics and the hardware in an autonomous car should be accounted for in an inclusive moral deliberation of how an autonomous car should react in given situations, precisely because they will yield systematic implications in the aggregate.
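A toy simulation can make this aggregate effect tangible. In the following sketch, heterogeneous human drivers apply slightly varying caution thresholds at a crossing, whereas a fleet running identical code applies a single threshold everywhere; all numbers are hypothetical and chosen purely for illustration.

```python
# Toy simulation of the 'challenge of scale': one constant, replicated
# across a fleet, produces a systematic shift that heterogeneous human
# judgment would not. All numbers are hypothetical.
import random

random.seed(0)
N = 1_000_000  # encounters at pedestrian crossings
perceived_risk = [random.random() for _ in range(N)]

# Humans: each driver's threshold for yielding varies around 0.5.
human_yields = sum(r < random.gauss(0.5, 0.1) for r in perceived_risk)

# Fleet: one threshold, set in a single software update, replicated N times.
FLEET_THRESHOLD = 0.45
fleet_yields = sum(r < FLEET_THRESHOLD for r in perceived_risk)

print(f"humans yield in {human_yields / N:.1%} of encounters")
print(f"fleet yields in {fleet_yields / N:.1%} of encounters")
# Shifting the constant by 0.05 changes the outcome in roughly 50,000
# encounters at once -- a systematic, aggregate effect that no single
# driver's idiosyncrasy could produce.
```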
Objections to this demand might again stem from the argument that advancing autonomous cars immediately will save numerous lives.Footnote 18 The benefit of saving many lives may ultimately outweigh certain reservations. However, by advancing without far-reaching testing and inclusive debate about the results, autonomous cars may also accidentally but systematically put particular traffic participants at risk who previously faced far lower risk . In some moral views, this may amount to offsetting some lives against others, at least statistically . Despite the proclaimed benefits of automated driving, I maintain that this inherent trade-off would need to be decided based on inclusive deliberation, rather than ad hoc based solely on the current technical limits to ethical evaluations. Simulations, cf. , could shed some light on these issues and may be a valuable instrument to guide more inclusive discussions. In any case, the mere unproven prediction of fewer accidents should not be the sole enabler for obtaining a license to decide on the formal ethical evaluations employed in autonomous driving.
Indeed, inclusive moral deliberation could avert the danger that people might stop caring. They might stop being interested in the circumstances under which their self-driving car will swerve, slow down cautiously, or overtake daringly. Even if each individual may not notice, in the aggregate, subtle changes in the self-driving cars’ behaviors will be significant. The danger lies in this significance eluding public scrutiny and inclusive moral debate if powerful entities, such as companies, may unilaterally decide on these matters. However, given current power structures, heavy reliance on mobility, and the barriers the general public faces in scrutinizing the formal aspects of autonomous driving software in both ethical and non-ethical respects, this may need to be enforced by means of regulation.
Power issues may be relevant in autonomous driving, but they are even more salient in the proposal of an artificial moral advisor . According to Giubilini and Savulescu , such a device would aid individuals in better complying with their own moral beliefs by presenting advice and relevant data. Even if the aim of such replicated devices is to facilitate alignment with the user’s individual moral norms, they may turn out to perpetuate the problems that they try to solve. This will be the focus of the next section.
Formal ethics as an obstacle to moral understanding
In the previous sections, I have attempted to argue that formal ethics is both an obstacle to inclusive moral deliberation and a means of perpetuating existing power asymmetries. In the following, I would like to refute an objection that may, from a practical point of view, be raised against working towards inclusive moral deliberation, especially its requirement of power neutrality: it may be argued that power asymmetries and a lack of inclusion may be tolerated to promote progress. That is, it may be objected that the ideal of inclusive moral deliberation may ultimately impede both technological and moral progressFootnote 19 because it slows down innovation. Progress lies at the core of the promises put forward by both formal and computational approaches to ethics, cf. Sect. 2. Formal approaches to ethics sit at the intersection of technology and moral philosophy. Hence, it may be suggested that, similar to how formalisms have advanced technology, formal approaches to ethics will promote moral progress.
However, as argued in Sect. 3.2, there is a difference between technological and moral progress, which mainly resides in the latter requiring moral understanding to permeate society. Hence, in the following, I will first summarize the argument that formal ethics does not facilitate moral understanding because it presents an obstacle to inclusive moral deliberation. Second, I will caution against formal ethics that proclaims ways of improving human morality, by elaborating on the notion that genuine moral progress on both an individual and a society-wide level demands that moral agents understand why something is morally right or wrong.
Formal ethics do not promote moral progress
In Sect. 4.1, I have argued that formal ethics, as a method proclaimed to add rigor to moral deliberation, presents both technical and social barriers to participating in moral deliberation. I have continued to argue that computational approaches to moral deliberation do not facilitate empathizing with those concerned in the deliberative process. Further, I have argued that formalisms present instruments of power. In Sect. 4.2, I have tried to elucidate asymmetric relationships of power as a practically relevant obstacle.
I believe these aspects together show that formal ethics not only fails to support inclusive moral deliberation but also provides actual instruments for perpetuating existing power asymmetries. In turn, if the concepts elaborated on in Sect. 3 hold some truth, formal ethics has the potential to impede moral progress by preventing moral understanding as a consequence of a lack of inclusive moral deliberation.
Note that I intend inclusive moral deliberation to concern not only actual debate but also the aggregation of decentralized dialogic or even individual acts of deliberation. As per Hills , the continuous exercise of such deliberation—turned into new moral beliefs and practices—amounts to proper moral understanding. If moral deliberation is not inclusive, a significant portion of society will be deprived of the opportunity to develop moral understanding. As a consequence, this portion will not contribute to moral progress as defined by Moody-Adams , who considers moral progress to follow precisely from deepened moral understanding and from turning this understanding into practice.
It may be objected that, for progress to result, it is sufficient that citizens simply abide by superior moral norms, and that this may be facilitated by devices. Thus, it may be claimed that this issue is simply definitional. A full defense of defining moral progress as something more than “better compliance (not mere conformity) with valid moral norms”  is beyond the scope of this article. However, it seems plausible that a proper definition should account for the plurality of moral views and the constant change in what we regard as valid moral norms. Thus, particularly in moral reform, it appears more essential to make moral progress reliant on our ability to argue for the validity of some moral norm in terms of the currently most defensible arguments. The notion of moral understanding as per Hills incorporates this ability to explain and exchange moral arguments. Moral progress may hence be thought of as being facilitated mostly by decentralized dialogic and multilogic exchange that both proliferates and challenges moral norms, rather than by coercing individuals into behavior of a higher moral standard.
I concede that formal ethics can spark a fruitful academic debate, whose results may eventually enter public discourse as accessible popular science. However, actual moral progress requires the widespread adoption and understanding of potentially superior moral norms. In contrast, for technological progress to result, widespread technological understanding may largely remain related to application, not to development and proliferation.
It may be an overly crude analogy, but one may hardly expect one's sense of orientation to improve while one is busy reading magazines in a self-driving vehicle. Likewise, I suggest that moral deliberation must be practiced both individually and in inclusive exchange with others. A good sense of direction may be a dispensable skill. Moral deliberation, however, is not.
Still, there are proposals to use formal approaches in pursuit of the laudable goal of attaining overall moral progress. If there is a use-case for formal ethics that can rebut these worries, it would be a proposal for systems that promote moral understanding. However, I would like to argue that artificial moral advisors, as proposed by, e.g., Giubilini and Savulescu  and Anderson , do not promote moral understanding and, hence, moral progress.
Artificial moral advisors do not promote moral progress
Next, I will argue that the notion of moral understanding contrasts with the idea of employing formal ethics to devise technological means for improving individuals’ compliance with their own moral standards or with society’s moral norms.
For instance, an artificial moral advisor is proposed to guide its user towards the morally right action . The authors state that such a system “would assist us in many ethical choices where—because of our cognitive limitations—we are likely to fall short of our own moral standards.” In essence, a kind of artificial moral enhancement is proposed, which, according to Anderson , should lead to humans being more reflective, less egoistic, and following good role models. Giubilini and Savulescu suggest achieving this by improving, or even substituting for, the function that human emotions play in supporting quick moral judgments by technological means. They envision a system that processes information and compiles it into moral advice more thoroughly than a human could. In doing so, the system should approach the ideal observer as per Firth , i.e., be “(1) omniscient with respect to non-ethical facts, (2) omnipercipient ([…] capable of […] using all the information simultaneously), (3) disinterested, (4) dispassionate, (5) consistent, and (6) normal in all other respects.” As the moral advice would be tailored to the moral beliefs of its user, such a system would not be subject to criticism based on the objection that a single moral theory is being promoted . However, in Sect. 4.1.2, I have tried to argue that dispassionateness may not be an ideal worth pursuing in every circumstance. Empathy may hold notions about the situation of a particular individual that mere facts and data collection cannot cover, but which may still be relevant for moral decisions.
Regardless of this objection, a moral advisor would need formal tools both for implementing the data collection and for proposing its ethical evaluation in alignment with the user’s moral values. Technical challenges plausibly call into question the prospects of arriving at a moral advisor that truly fulfills the criteria of an ideal observer. For instance, how can it be ascertained whether all data relevant to an ethical evaluation has been acquired? Both hardware and software constraints will restrict data acquisition and evaluation, meaning that the items to consider must be prioritized. However, like Giubilini and Savulescu [20, p. 170], one may object that humans are “suboptimal information processors”, too. I concur, but at the very least, humans may from time to time question which information is relevant and which is not. Posing and answering such questions may, in fact, be a highly relevant aspect of forming moral understanding. A formalized system would be much more rigid and would encourage users to stop questioning the origins and relevance of the data used for ethical evaluation. In addition, the data collection processes may be tacitly determined by the entity that supplies the moral advisors, which is likely to have a non-negligible effect on the moral evaluations. Even if single evaluations may be relatively robust, biases may appear in the aggregate. A significant asymmetry in relationships of power ensues. I have tried to illustrate similar tendencies, which can already be observed in the autonomous driving industry today, in Sect. 4.2. It stands to reason that the potential for abusing such power is even more pronounced in general everyday moral advice than in self-driving cars.
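To see how such prioritization might tacitly shape evaluations, consider a minimal sketch of a vendor-set relevance ranking deciding which items of information ever enter the advisor's evaluation. The item names, scores, and budget are entirely hypothetical.

```python
# Minimal sketch of constraint-driven prioritization in a hypothetical
# moral advisor: a fixed, vendor-chosen relevance ranking decides which
# information items enter the ethical evaluation at all.
import heapq

# Hypothetical relevance scores, assigned at design time.
ITEM_RELEVANCE = {
    "physical_harm":      0.95,
    "legal_consequences": 0.80,
    "financial_cost":     0.60,
    "emotional_impact":   0.30,  # empathy-laden items may rank low by fiat
    "long_term_trust":    0.20,
}

def select_items(budget):
    """Return the `budget` highest-ranked items; everything else is
    silently excluded from the evaluation."""
    return heapq.nlargest(budget, ITEM_RELEVANCE, key=ITEM_RELEVANCE.get)

# A device that can process only three items per decision never 'sees'
# emotional impact or long-term trust, and gives its user no prompt to ask why.
print(select_items(3))  # -> ['physical_harm', 'legal_consequences', 'financial_cost']
```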
However, my main point in this section is that such technological means powered by formal ethics will not contribute to moral progress as defined, e.g., in Sect. 3.2. In this conception, moral progress follows from moral understanding, i.e., an active attempt to develop a comprehension of moral reasons translated into moral practice. I maintain that formal tools for programming devices that deliver moral advice would prevent such an active formation of understanding.
Still, moral advisors may be proposed to shortcut only everyday decisions, especially those about which individuals may be said, or may at least feel, to have already reached a high level of moral understanding. However, as moral deliberation is not only a solitary but also a collaborative endeavor, such advisors do not facilitate exchange. Instead, they are aimed at automation to save time. In a few instances, people will still exchange views about the moral advice they followed, scrutinize it, and perhaps challenge it. However, a system as proposed by Giubilini and Savulescu  is designed to counteract our purportedly fallible intuitions and emotions. The system expects us to follow suit unquestioningly because we “do not have the time and the mental resources to gather and process all the information” [20, p. 170]. Arguably, moral understanding would not ensue. In light of my proposition in Sect. 3.2 that moral understanding is paramount to moral progress, Giubilini and Savulescu’s proposal of an artificial moral advisor would then be likely to fail its own proclaimed goal of promoting moral progress.
In summary, I suggest that we should be very cautious about employing formal ethics as a potentially barrier-laden, opaque, and, ultimately, non-inclusive means to arrive at “more rigorous” moral judgments, regardless of whether this may be in an automated or expert-guided way.