1 Introduction

Artificial intelligence (AI) systems are being adopted in a growing set of practical contexts. From industry, healthcare, and households to warfare, finance, and law enforcement—just to name a few—AI technologies are becoming increasingly embedded in the fabric of individual and social existence (Dubber et al. 2020; Crawford 2021). Correspondingly, the scope of human autonomous decision-making and agency is inevitably affected and morphs into new configurations. The delegation of tasks to AI systems affects human autonomy in intricate ways, reshaping its contours, modulating its character, and raising thorny philosophical and ethical questions. Both potential enhancements of and constraints on its exercise demand thorough evaluation. Accordingly, the respect and promotion of autonomy lie at the very core of many regulatory frameworks (Jobin et al. 2019; Floridi and Cowls 2019; Fjeld et al. 2020).

The domain of road transport presents important challenges at the intersection of autonomy and automation. In particular, the development of connected and automated vehicles (CAVs) is expected to revolutionize the role of human vehicle occupants in traffic decisions and actions (Michelfelder 2022; Jenkins et al. 2022; Fossa 2023). As the scope of human choice and agency shifts, threats to and opportunities for the exercise of autonomy need to be carefully assessed. From the perspective of engineering ethics, an inquiry into the effects of driving automation technologies on user autonomy is necessary to steer design decisions away from manipulative or paternalistic outcomes and toward the support of users’ autonomous behavior.

The present paper aims to explore the complex nature of the ethical problems arising at the intersection of human autonomy and AI systems in the domain of driving automation. In a nutshell, it claims that the literature has mainly tackled the issue on a fairly general level, and mostly with reference to the controversial issue of crash-optimization algorithms. As a result, only limited design insights can be drawn from its study. However, integrating ethical analysis and design practices is critical to pursuing the implementation of such an important ethical value into CAV technologies. To this aim, it is argued, a more applied approach targeted at examining the impacts of current CAV functions on human autonomy should also be explored. As an example of the intricacy of this task, the case of automated route planning is discussed in some detail.

The paper is structured as follows. Section 2 introduces the general debate on the ambiguous effects of AI systems on human autonomy, showing the importance of nuanced analyses and setting the stage for the subsequent discussion. Section 3 tackles the literature on the impacts of CAVs on human autonomy and provides a critical assessment of its significance, suggesting that focusing primarily on current CAV functions might help take a first step towards the elaboration of viable design guidelines. Section 4 provides a preliminary example of such an examination by discussing the case of automated route planning. Section 5 concludes the paper by offering some final remarks.

2 Human autonomy and AI systems

Before considering the literature dedicated to the prospected impacts of CAV technologies on human autonomous decision-making and agency, let us briefly introduce the debate on how AI systems influence this ethical value, which is worthy of protection and promotion.

Many commentators have examined and discussed the diverse repercussions of AI systems on human autonomy—the relational, situated, and multi-layered nature of which also needs to be duly factored in (e.g., Mindell 2015; Rubel et al. 2021; Tiribelli 2023). In sum, two main and opposing effects have been noticed. On the one hand, AI systems can be said to support user autonomous decision-making and its translation into practice. Indeed, they have the potential to enlarge our choice and action possibilities, both by assisting us in navigating complex decisional processes and by freeing our hands from tasks we cannot or would rather not carry out. Moreover, they offer the possibility of improving the efficiency, effectiveness, and safety of many operations, which represents an indirect condition for the enjoyment of autonomy. In doing so, however, AI systems necessarily process information, compute decisions, and sometimes even implement courses of action on our behalf, thus bypassing our own judgment and constraining our agential possibilities. Therefore, the relationship between human autonomy and AI systems does not present a monolithic profile. Rather, it shows a multifaceted nature that calls for nuanced examination.

Critical reflection on enhancing and constraining effects has accordingly characterized the literature on both AI systems influencing human decision-making and robotic applications acting on our behalf in the physical world.

As far as decision-making is concerned, considerable attention has been dedicated to recommendation systems (Prunkl 2022; Bonicalzi et al. 2023). Among others (Calvo et al. 2020; Laitinen and Sahlgren 2021; Rubel et al. 2021), a particularly insightful perspective on the multifaceted impacts of algorithmic tools on the scope of human autonomy has been proposed by John Danaher (2016, 2018, 2019). Danaher claims that algorithmic tools pose relatively new challenges to the exercise of user autonomy due to their pervasiveness, centralization, and targeting capacities. At least three dimensions call for accurate analysis in this sense (see also Raz 1986). First, algorithmic tools might impact our capacity to rationally choose the right means to our ends—the rationality condition. Second, they might foster or hinder our capacity to meaningfully access “an adequate range of options” (Danaher 2019: 105)—the optionality condition. Third, and finally, they might either support or frustrate our freedom to counteract unwanted coercion, encroachment, and manipulation—the independence condition.

While the rationality condition does not seem to be in substantial peril, Danaher notes, optionality and independence might indeed be threatened by the use of recommendation systems. By pre-filtering or drawing attention to given options, guiding decision-making through incentive schemes, or making choices on our behalf, AI systems might negatively affect our autonomy, setting the stage for “algorithmic micro-domination”. Hence, the risk of variously nudging users against their will or manipulating them through recommendation systems should not be underplayed (Sunstein 2015; Vallor 2016: 188–207; Ienca 2023), as has been stressed in the debate surrounding Yeung’s (2017, 2019) notion of the “hypernudge”.

Nonetheless, it would be one-sided to conclude that recommendation systems can only be detrimental to human autonomy. Rightly tuned recommendation systems enable us to delegate uninteresting decision-making tasks or analytical operations on huge amounts of data that we would feel inadequate to carry out ourselves. Given the right conditions, moreover, filtering and ordering options, nudging toward preferred outcomes, and automating choices might support user autonomy, challenged as it is by cognitive limitations, information overload, and continuous decision-making. Besides, opting out of algorithmic tools might prove almost impossible in a world where they constitute an integral part of how we choose and behave. As many scholars further clarify (Danaher 2018; Milano et al. 2020; Calvo et al. 2020), recommendation systems co-shape how individuals access, make sense of, and act on digital information. Accepting their mediating role and learning how to live with it appear much more reasonable and promising than attempting to separate oneself from it entirely.

Given their ubiquity and significance, it is fundamental to ensure that recommendation systems are well-adjusted when released into society. What is critical, then, is to learn how to design, regulate, deploy, and use algorithmic tools so as to enhance human rationality, optionality, and independence while mitigating related risks. To do so, challenges to the respect of human autonomy and self-determination should be explicitly and appropriately tackled. Building on such considerations, Varshney (2020) stresses the need to operationalize the value of human autonomy. The massive influence that recommendation systems can exert on decision-making processes, it is argued, needs to be experimentally assessed and carefully tamed by design, so as to leave enough space for the expression of user autonomy. As Calvo et al. (2020: 32) claim in their discussion of YouTube’s recommender system, “designing for autonomy is an ethical imperative to the future design of responsible AI”—but one that requires fine-grained, context-related, and multi-dimensional analyses to be properly carried out.

Even if to a lesser extent, the complex impacts of AI on human autonomy and the need to tackle related challenges have also been studied in relation to robotic systems acting on our behalf (Formosa 2021). Persuasive approaches based on captology and nudge theory (IJsselstein et al. 2006; Fogg et al. 2008; Siegel et al. 2009) have sparked a heated debate on how to protect and support the autonomy of users interacting with robots. Borenstein and Arkin (2016) have discussed the ethical legitimacy of “robotic nudges”—i.e., of programmatically influencing user behavior by design through robotic technologies. Since robots are embodied and act in the physical world, their persuasive potential might be of massive help in situations where physical or cognitive limitations work against the exercise of autonomous decision-making and agency. Moreover, specific design choices might also serve wider socio-ethical goals, such as spreading (supposedly) socially beneficial behaviors. However, nudging people towards given decisions and actions poses obvious threats to user autonomy. For instance, design choices aimed at maximizing acceptance or conveying given socio-ethical values have been criticized as possible threats to user autonomy, even though they might support processes of moral growth (Weßel et al. 2021; Fossa 2022a). Illegitimate encroachments on users’ autonomous decision-making such as paternalism and manipulation are evident risks that roboticists and regulators have an obligation to minimize, particularly when vulnerable users are involved (Sparrow 2002; Sparrow and Sparrow 2006; Sharkey and Sharkey 2010).

An insightful examination of how robots—more precisely, social robots—could diversely impact human autonomy has recently been provided by Formosa. As the author suggests, robotic technologies “could enhance and respect, as well as inhibit and disrespect, the autonomy of their users” (Formosa 2021: 596). The dual nature of prospected impacts is particularly evident in Formosa’s analysis, where potential benefits to the exercise of user autonomy are systematically associated with opposite inhibiting risks. On the bright side, through interactions with social robots users could set and pursue ends they deem more valuable, improve competencies conducive to autonomy, and gain access to more authentic choices. For example, tasks we deem dull and meaningless could be offloaded, so as to gain time to engage in the activities we value most and that reinforce our sense of autonomy and self-respect. Moreover, robotic support could help us make decisions and act in greater accordance with our own convictions, or consider all relevant information and reasons before making up our minds.

Symmetrically, however, human autonomy could also be threatened by social robots. These AI systems might make fewer valuable ends available; obstruct the development, maintenance, and cultivation of autonomy competencies; and push users toward less authentic choices. For instance, social robots might handle by default tasks we deem valuable and would like to carry out ourselves, without our knowledge. As a result, our autonomy competencies might wither, and the authenticity of the ensuing choices might be challenged. Finally, and most seriously, social robots might weaken or altogether disrespect human autonomy. If designed to operate in deceptive, manipulative, or coercive ways, these technologies might incentivize dependency and turn users into means to ulterior ends, thus severely threatening human dignity as well. As a result, particular care must be exercised in “design, regulation, and use” (Formosa 2021: 596) so that the ethical value of human autonomy is duly protected and promoted.

To summarize, the ethics literature on human autonomy and AI stresses the necessity of distinguishing between technologies designed and deployed in ways that support human autonomous decision-making and agency, and technologies that risk manipulating, coercing, or overly constraining the scope of such critical components of human existence. However, it also acknowledges that separating ethically legitimate from illegitimate effects on user autonomy is rarely straightforward, and that controversial trade-offs between the right to individual autonomy and the pursuit of supposed social goods or values are inevitable. That being said, the literature stresses the central normative role of human autonomy as an ethical value variously connected to other pivotal principles such as responsibility, identity, dignity, and well-being (Laitinen and Sahlgren 2021; Tiribelli 2023). Even though off-loading to automated systems the burden that comes attached to human autonomy might appear alluring, as Chiodo (2023) argues, doing so would engender an ethically troublesome deterioration of human identity and dignity. As AI technologies increasingly co-shape human decision-making and agency, then, it is critical to protect and promote the exercise of user autonomy.

3 Human autonomy and driving automation

The debate on CAVs and human autonomy has also brought to the surface the inextricably dual nature of prospected impacts. Threats to and opportunities for human autonomy are so numerous and deeply entangled with each other that much philosophical work is needed to clarify how this value is to be specified and upheld in the context of driving automation.

Generally speaking, and much in line with the previous remarks, driving automation has been found to exert an ambiguous effect on human autonomy. On the positive side (e.g., Williams et al. 2020), delegating driving to CAVs is expected to free time and energy to pursue one’s own self-determined interests and goals. By lowering the psychological costs of driving, automation is also expected to support autonomous decision-making on matters that importantly affect individual well-being—such as, for instance, where to live and where to work. Furthermore, driving automation would offer transport opportunities to cognitively or physically impaired people, elderly people, children, and other social categories that as of today enjoy little or no access to it, thus improving their capacity to implement autonomous choices (e.g., Goggin 2019).

However, and quite paradoxically, such benefits depend on the full automation of what Michon (1985) defined as operational and tactical driving decisions—i.e., decisions concerning how to handle vehicle controls and how to behave in traffic. In other words, driving systems must be capable of managing, in an automated fashion, choices concerning, e.g., when to speed up (Smids 2018), when to slow down (Nyholm and Smids 2020), when to let other road users pass (Millard-Ball 2018), when to bend traffic rules for the greater good (Reed et al. 2021), and so on. This delegation evidently entails constraining users’ autonomous decision-making, sometimes in morally relevant ways.

A great deal of attention has been dedicated in this sense to the hotly debated problem of crash-optimization algorithms (e.g., Nyholm 2018; Dogan et al. 2020; Jenkins et al. 2022). If driving is to be fully automated, so too are decisions concerning how to distribute risk among the parties involved in unavoidable collisions (Goodall 2016). Whether delegating such decisions would support or limit human moral autonomy is controversial. On the one hand, crash-optimization algorithms would make it possible to deal ethically with situations that used to extend beyond the reach of human moral agency, thus expanding the general domain of human moral autonomy. On the other hand, implementing these algorithms would constrain the more specific domain of user moral autonomy in ways that might make some legitimate ethical choices impossible by design and possibly amount to moral paternalism (Millar 2015; Gogoll and Müller 2017; Müller and Gogoll 2020). Therefore, some believe, design solutions should be implemented to allow users to exercise autonomy in adequate ways even when the management of unavoidable collisions is automated and delegated to CAVs (Millar 2016; Contissa et al. 2017).

Full driving automation, however, is not the only way to support human autonomy in road transport. Driving technologies such as Advanced Driver Assistance Systems (ADAS) or partial automation solutions also have a role to play. For instance, these technologies could help drivers better manage physiological and psychological constraints on autonomy by automating emergency functions (e.g., emergency braking) or providing valuable information concerning driving behavior (e.g., lane change warning and fatigue detection systems). Moreover, offering drivers valuable traffic information, as happens with smart intersections and smart roads, might also be construed as assisting humans in exercising autonomy behind the wheel. However, these forms of assistance clearly presuppose the involvement of a vehicle occupant in the execution of the driving task, thus precluding the enjoyment of the autonomy-enhancing effects discussed in the previous paragraph. At the same time, unclear or cumbersome frameworks of shared control over driving tasks might generate mode confusion or inadequate degrees of user reliance, leading to situations where human autonomous decision-making and agency are impeded (Hancock 2019; Bellet et al. 2019).

Interestingly enough, the same ambiguity can be identified on the regulatory side as well. In the case of Europe, the 2020 report Ethics of Connected and Automated Vehicles. Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility (Horizon 2020) establishes an ethical framework for CAVs and offers twenty recommendations aimed at guiding stakeholders in the effort of aligning driving automation technologies with relevant ethical values. Following the lead of many other frameworks (Floridi et al. 2018; HLEGAI 2019; Jobin et al. 2019), the report grants particular recognition to the value of autonomy as one of the eight overarching ethical principles for driving automation (Santoni de Sio 2021). According to it, human beings are to be conceived as “free moral agents” (Horizon 2020: 22) whose right to self-determination ought to be respected. The importance of autonomy reverberates through several recommendations, ranging from the protection of privacy rights and the promotion of user choice to the reduction of opacity and the enhancement of explainability. The principle of personal autonomy, then, demands that CAVs be designed so as to “protect and promote human beings’ capacity to decide about their movements and, more generally, to set their own standards and ends for accommodating a variety of conceptions of a ‘good life’” (Horizon 2020: 22). As argued by Santoni de Sio and Fossa (2023), however, supporting both autonomous decision-making about driving and the autonomous pursuit of different conceptions of a ‘good life’ through mobility is hard to achieve, since these two specifications of autonomy point towards seemingly incompatible technological pathways.

In sum, insofar as CAV technologies show the capacity to influence or bypass human decision-making and agency in the traffic context, driving automation too poses complex challenges to the respect and enhancement of user autonomy. On the one hand, human autonomous decision-making and agency conducive to well-being could be enhanced by full automation, which promises more inclusivity and more meaningful time management at the expense of the exercise of autonomy over driving decisions. On the other hand, CAV technologies could enhance human autonomous decision-making and agency through the implementation of ADAS and partial automation, which aim at improving driving behavior by limiting the effects of regular drivers’ physical and cognitive constraints. However, these solutions presuppose the presence of a driver, which excludes the enjoyment of the autonomy-enhancing benefits so often associated with full driving automation.

These general reflections are useful for understanding the impacts of driving automation on human autonomy. However, they provide little practical insight to engineers involved in the development of CAV technologies. Transitioning from the abstract acknowledgment of autonomy to more practical endorsements is critical to ensure that driving automation is pursued in alignment with what the value of autonomy demands. As such, it represents a clear mission of responsible design (Morley et al. 2021, 2023)—and one that has been clearly acknowledged in the field of driving automation (Gerdes and Thornton 2015; Thornton et al. 2017; Gerdes et al. 2019; Millar et al. 2020). However, the generality of the discussion reviewed above, paired with its main application to the controversial issue of crash-optimization algorithms—which many believe too abstract to be relevant (e.g., Davnall 2020; De Freitas et al. 2020)—offers only limited guidance for the task of translating general calls for the respect and promotion of human autonomy into more actionable design guidelines. Alongside this discussion, it is suggested, current CAV technologies and their impacts on user autonomy should also be assessed with the aim of raising an interdisciplinary debate centered on design strategies and best practices. Given the importance of tackling this side of the problem as well, the rest of the paper shifts attention to a more applied discussion intended to show the relevance and intricacy of the design issues surrounding the integration of the value of autonomy into key components of current CAVs.

As a first step in this direction, a sharper focus on given CAV functions might help bring theory and practice closer to each other. A hint in this direction can be drawn from the European report (Horizon 2020: 48), where the authors propose to structure reasoning by first assessing the ethical relevance of “CAV applications of algorithm and/or machine learning based operational requirements and decision-making”. Considering the effects of specific CAV functions on human autonomy might help anchor the analysis to given design and deployment contexts, thus providing precise starting points for a discussion of technical requirements. Building on similar considerations, a function-based working approach has recently been proposed with the intention of supporting driving automation practitioners in the operationalization of ethical values (Fossa et al. 2022). As a first step, the methodology suggests determining whether—and with respect to how many of the eight ethical principles advanced in the European report—the technological function under examination is relevantly impactful. If F is the function under assessment and the principle of autonomy is considered, the first question to ask would then be: “Should F remain under user control for personal autonomy to be respected in high-level automation?” (Fossa et al. 2022: 7).
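
By way of illustration only—the names and structure below are hypothetical, not drawn from Fossa et al. (2022)—this screening step could be recorded as a set of function-by-principle judgments, with flagged functions proceeding to design analysis:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Assessment:
    """One function-by-principle judgment recorded during screening."""
    function: str                 # the CAV function F under examination
    principle: str                # e.g., "personal autonomy"
    relevant: bool                # does F relevantly impact this principle?
    user_control: Optional[bool]  # for autonomy: should F remain under user control?
    rationale: str

def screen(judgments: List[Assessment]) -> List[Assessment]:
    """Keep only the judgments that flag a principle as relevantly impacted."""
    return [a for a in judgments if a.relevant]

# Example: screening automated route planning against the autonomy principle.
route_planning = Assessment(
    function="automated route planning",
    principle="personal autonomy",
    relevant=True,
    user_control=True,  # a defeasible first answer, to be refined during design
    rationale="Planning criteria may diverge from user preferences (see Sect. 4).",
)

for a in screen([route_planning]):
    print(f"{a.function} -> {a.principle}: needs design attention. {a.rationale}")
```

Such a record would merely structure the assessors’ judgments; the hard interdisciplinary work lies in answering the control question itself.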

Answering this question is critical to inform subsequent design choices, but it is also extremely difficult due to the multi-layered analyses it requires. Initially at least, a theoretical examination might be useful to trace the general contours of the discussion. In this spirit, and as a way to show the intricacy of the task at hand, the next section is dedicated to a preliminary theoretical exploration of the possible impacts on human autonomy of an important function within the scope of driving automation: automated route planning. Building on what has been shown in the previous sections, the next pages will hopefully contribute to developing a clearer understanding of the challenges that await any attempt to design CAV automated route planning (and possibly many other automated driving tasks) in accordance with the ethical value of autonomy. The identification of pitfalls and difficulties is intended not as a dismissal of what can be achieved through design ethics approaches, but rather as an opportunity to kickstart a participatory conversation on the issue. Indeed, an accurate representation of the level of difficulty presented by a challenge already marks a step toward its responsible management (Siegel and Pappas 2023).

4 An example: automated route planning

Planning routes from points of origin to destinations is a prominent component of the traveling experience. Tools—e.g., compasses, quadrants, maps—have always played an essential part in it. With GPS and digital maps, navigation systems have made it possible to delegate route planning to artificial systems capable of computing various options on our behalf according to pre-set criteria. Automated route planning is, of course, essential to driving automation too. Understanding whether the automation of route planning is relevant vis-à-vis the ethical value of autonomy is necessary to align CAVs with legitimate moral expectations and, thus, build social trust in the technology.

Interestingly, the automation of route planning through navigation systems—which arguably belong to the class of recommendation systems discussed in Sect. 2—has already raised discussions concerning its impacts on human autonomy. Consider, for example, Nickel et al. (2010). As a way to inquire into the controversial relation between trust and technology, Nickel and colleagues introduce a fictional case study to explore how navigation systems reshape and co-shape human autonomous behavior in the practical context of transportation. The case study presents a situation in which the criteria applied by the planning algorithm to compute the best route fail to reflect the (changing) needs of the driver. Having delegated route planning to the navigation system, however, the driver relies on the route recommendation provided. As a result, she is led to a route she would not have taken otherwise—so, in a sense, against her will—and one that turns out to disrupt her plans.

Indeed, misalignment between system settings and user preferences might be opaque to users or generally difficult to detect. As a consequence, the delegation of route planning to navigation systems might turn from supporting user autonomous mobility to bypassing human decision-making in ways that might be perceived as illegitimate. The question of user reliance on the performance of navigation systems and their ability to adapt to drivers’ needs and values is, therefore, key to understanding the impact of system recommendations on user autonomy. Even though users are those who ultimately determine what route is taken, the ways in which navigation systems are designed and used importantly co-shape this decision-making process, thus redefining the scope of user autonomy.

More recently, Frischmann and Selinger (2018: 81–101) have also offered some noteworthy considerations on the impacts of navigation systems on user autonomy. Navigation systems are there discussed as an example of “mind-extending technologies”—i.e., technologies to which cognitive tasks are delegated. While their liberating and empowering effects on mobility choices and practices are difficult to deny, less evident constraints on human autonomy—intrusive nudges (e.g., to speed up so as to beat the predicted time of arrival), manipulative geographically targeted advertising, navigation deskilling, and loss of spatial awareness—must also be attentively factored in to paint a clear picture of how this technology reshapes human autonomy in navigating the world.

Importantly, Frischmann and Selinger notice that even though it is the user who decides whether to use the technology, the fact that user autonomy is at least challenged by the intentions, biases, plans, and interests of those who design it should not be ignored. Therefore, the scope of user autonomy can be appropriately measured only by taking into due consideration the mediating role of the technology and the wider context in which its use is inscribed.

The previous observations suggest that there are reasons to consider automated route planning a relevant function vis-à-vis human autonomy even when it comes in the technological form of navigation systems. Its implementation in CAVs arguably corroborates this claim. Indeed, the impacts on human autonomy are much more tangible in the case of driving automation. Compared to ordinary navigation systems, CAVs present a further aspect: they execute route plans on our behalf, entirely bypassing our judgment concerning how to follow the recommended route. Navigation systems do not exert any direct control over the steering wheel. Even though automation biases might make it difficult to critically assess or reject route recommendations, users do retain the possibility of choosing otherwise. This possibility is considerably reduced with CAVs, where routes are not just computed but, once selected, seamlessly turned into practice. This represents a crucial novelty. Route planning through navigation systems only partially automates the transportation task of going from A to B. Route planning through CAVs automates the entire task, reshaping the scope of user autonomy even more substantially.

Some hints suggesting that automated route planning in CAVs should be considered a relevant function with reference to the value of user autonomy can be found in the literature. For instance, Danaher (2019: 106, 109) briefly refers to route planning in the essay quoted in Sect. 2. Moreover, the authors of the European report (Horizon 2020: 41) recommend to “support user empowerment in (…) choosing routes”. However, an analysis dedicated to measuring the full scope of how the automation of route planning in CAVs might reshape human autonomy is yet to be carried out. The following lines intend to contribute to this issue by clarifying what is at stake in terms of autonomy when route planning is automated through CAVs.

This case too is characterized by ambiguous impacts. The automation of route planning in CAVs evidently entails a constraint on users’ “capacity to decide about their movements” (Horizon 2020: 22). Indeed, their judgement concerning what route to take is mediated by the system, which computes and applies the best route on their behalf. This restriction on autonomous decision-making concerning route planning, however, can concurrently be said to enhance user autonomy in at least two ways. First, it allows vehicle occupants to redirect their energies and attention to what matters most to them, thus supporting their ability to “set (their) own standards and ends” in the pursuit of their conception of a “good life” (Horizon 2020: 22). By taking care of selecting the best route and driving along it, CAVs support users’ needs and desires concerning how to best occupy the time they spend en route. In this way, automated route planning removes a constraint on the users’ autonomous organization of their own time by giving them the possibility of deciding for themselves how to spend it. Going back to Formosa’s framework, all this seems to enhance the pursuit of more valuable ends and more authentic choices on the user’s part.

Arguably, automated route planning might be said to foster user autonomy also with reference to Danaher’s optionality condition. The possibility of utilizing a driving system capable of navigating spaces with which its users are not familiar might importantly expand their mobility options. Through this function, users might become capable of reaching critical destinations—e.g., hospitals—without prior knowledge of their location, and without having to worry about taking a wrong turn. More generally, the possibility of delegating route planning to CAVs might be expected to increase user confidence in moving throughout the road network, thus increasing the range of available options.

Finally, automated route planning—at least in principle—serves users’ autonomy by actualizing their intentions better than they themselves could. In Danaher’s terms, the technology could be said to enhance the users’ rationality condition, i.e., their capacity to “plan and execute complex intentions” (Danaher 2019: 105). If we suppose—and there is little reason not to—that CAV users’ intention is to get to their destination as quickly and smoothly as possible, avoiding traffic jams and unexpectedly closed streets, then the automated route planning function can count on much more information to do so effectively. It is safe to hypothesize that delegating route planning considerably enhances users’ ability to avoid congestion and other time-consuming nuisances. In this sense, alignment between users’ main intentions and system performance could be said to foster their autonomy: to provide a powerful means for translating their plans into practice, even if mediately.

However, this last point is hardly generalizable. Indeed, high-level alignment between self-determined user preferences and automated route planning in CAVs might generate misalignment at a finer level of granularity. Many personal reasons, even ethically relevant ones, might influence the roads we decide to take. Bypassing these decisions by delegating route planning to CAVs might have a relevant impact on the exercise of user autonomy and lead to what Formosa terms less authentic choices. Once a destination is set, automated route planning programs compute multiple strategies and select the most convenient—i.e., the one that optimizes the parameters that programmers have selected to represent various constraints and costs. In the pursuit of high-level transportation goals according to high-level specifications—such as reducing travel time and avoiding traffic jams—more detailed and context-related constraints might be overlooked. For instance, as the authors of the European report briefly consider, CAVs could compute and drive along routes that “result in personal data collection that the user could not anticipate from the outset, to which they have not consented, and of which they may never become aware” (Horizon 2020: 42). The fact that automated route planning could expose users to privacy infringements they would have avoided, had they had the chance to do so, seems to point to a possible violation of human autonomy that designers should take into account and possibly manage.
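
To see how such misalignment can arise mechanically, consider the following toy sketch (hypothetical data and names, not drawn from any cited system): a planner that minimizes a generalized cost will route through a digitally monitored road whenever the cost function weighs travel time alone, whereas a privacy-sensitive weighting would avoid it.

```python
import heapq

# Toy road network: (origin, destination, travel minutes, digitally monitored?)
EDGES = [
    ("A", "B", 10, False), ("B", "D", 12, False),  # slower, unmonitored path
    ("A", "C", 5, True), ("C", "D", 5, True),      # faster, monitored path
]

def best_route(start, goal, privacy_weight=0.0):
    """Dijkstra over a generalized cost: minutes + privacy_weight * monitored."""
    graph = {}
    for origin, dest, minutes, monitored in EDGES:
        graph.setdefault(origin, []).append((dest, minutes + privacy_weight * monitored))
    frontier = [(0.0, start, [start])]  # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return None

# Time-only cost: the planner routes along the monitored roads.
print(best_route("A", "D"))                     # (10.0, ['A', 'C', 'D'])
# A privacy penalty in the cost function flips the choice.
print(best_route("A", "D", privacy_weight=50))  # (22.0, ['A', 'B', 'D'])
```

Nothing in the optimization itself signals the problem: the “best” route is best only relative to the parameters the programmers chose, which is precisely where user preferences can be silently overridden.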

Further threats to autonomy might come from considering how automated route planning might be bent to serve the interests of a wider set of stakeholders. Consider, for example, how targeted advertising could be paired with information about planned routes and localization to make potential customers drive by given shops, restaurants, and other commercial activities (Glancy 2012; Hansson et al. 2021; Mulder and Vellinga 2021). While some users might embrace this form of advertising, others might perceive it as intrusive and manipulative—i.e., as an illegitimate encroachment on a domain that should fall under the purview of their autonomy. Deciding whether to be exposed to commercial advertising, and whether routes should be planned also according to this criterion, falls within the purview of autonomous decision-making concerning road transport. Bypassing users’ judgment without their explicit consent would amount to violating Danaher’s independence condition and pave the way for instances of algorithmic micro-domination, particularly when it exploits the users’ psychological vulnerabilities that constitute what Rubel et al. (2021: 105–109) define as “affective challenges” to human autonomy (Fossa 2023: 57–59).

These reflections offer a starting point for a discussion of design solutions aimed at delivering the benefits of automated route planning while minimizing the related risks. In the cases of both unwanted personal data collection and location-based targeted advertising, threats to user autonomy mainly stem from misalignment between system settings and user preferences. Indeed, if users were given the possibility of personalizing the criteria for route planning, and if reliable information about digitally monitored roads were publicly available, they could autonomously choose whether to include these stretches of road among the ones taken into consideration by the system. Similarly, seeking user consent to location-based targeted advertising in CAVs in explicit, fair, and understandable ways through system preference settings might help respect user independence without preventing interested users from enjoying the service.

Perhaps, then, an interface aimed at allowing users to specify route planning preferences would help “strike a better balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents” (Floridi et al. 2018: 698). Indeed, Calvo and colleagues (2020: 45) stress the importance of interface design for empowering user control (and, thus, autonomy) by arguing that “design for autonomy-support in this interface sphere is largely about providing meaningful controls that allow users to manipulate content in ways they endorse”. Accordingly, Kun et al. (2016: 37) claim that the most relevant challenges for interface design in driving automation have precisely to do with ensuring that the user “retains autonomy at the desired level”. By designing interfaces that allow users to make sure that automated route planning is carried out in alignment with their own criteria, threats to optionality, independence, and authenticity could be more explicitly brought to users’ awareness and managed by design.
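
Continuing the toy sketch above (again with hypothetical names), such an interface could be modeled as a small set of user-facing preference toggles translated into the planner’s cost weights, so that route computation reflects settings the user can inspect, endorse, and revise:

```python
from dataclasses import dataclass

@dataclass
class RoutePreferences:
    """User-facing route planning settings exposed through the CAV interface."""
    avoid_monitored_roads: bool = False
    allow_location_based_ads: bool = False

    def privacy_weight(self) -> float:
        # Translate a user-endorsed toggle into a planner-facing cost weight.
        return 50.0 if self.avoid_monitored_roads else 0.0

# Reusing best_route() from the previous sketch:
prefs = RoutePreferences(avoid_monitored_roads=True)
print(best_route("A", "D", privacy_weight=prefs.privacy_weight()))
# (22.0, ['A', 'B', 'D']) – the unmonitored route, as the user endorsed
```

The design choice at stake is precisely where such weights come from: fixed by programmers, as in the default above, or surfaced as meaningful controls in the sense of Calvo and colleagues.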

The idea of fostering optionality and independence by allowing users to set their automated route planning system according to their preferences also exhibits some limitations. For instance, user preferences in terms of route choices might reveal themselves only the moment the traffic situation makes them relevant, or shift depending on context. For example, a user might be willing to accept her routes being generally computed based also on data concerning her consumer behavior, but not when she is late for work or the day after she went shopping. Similarly, a user might prefer not to be driven through a neighborhood she considers unsafe, but not in the case of an emergency trip to the hospital or if she is late for an important meeting. Were these users behind the wheel, they could exercise their autonomy directly in ways system settings could hardly reproduce. Situations of misalignment, then, are likely to occur even if a wide range of user settings is implemented in the system. Trade-offs between the autonomy-enhancing and autonomy-constraining aspects of automated route planning seem likely to represent a standing condition of CAV technologies, rather than a fixable bug.

Finally, but most importantly, these considerations remind us that the effort of designing automated route planning systems that respect and promote user autonomy does not occur in a moral vacuum. Other values are relevant to driving automation and call for adequate consideration. Pursuing user autonomy through design choices and regulatory measures is likely to affect how legitimate claims based on other ethical values can be accommodated. The ethical design, development, and use of ‘trustworthy’ CAVs must be sought with as many relevant ethical values in mind as possible. Otherwise, unexpected side effects conveying value hierarchies that defy rational or social support would likely lead to rejection and negatively affect people’s trust.

From this perspective, it might be the case that on some occasions the value of human autonomy should take a back seat. Indeed, there might be strong, ethically relevant reasons to delegate route planning to CAVs, even though some constraints in terms of user autonomy are to be expected. For instance, automating route planning might lead to remarkable collective advantages in terms of traffic efficiency and environmental sustainability. Automating route planning according to shared parameters could contribute to minimizing uncertainties and making vehicle behavior more predictable, which would enable a more optimized and flexible management of traffic flows (e.g., Friedrich 2016). In case of need, centralized traffic management could optimize the distribution of CAVs on the road network, ultimately improving traffic efficiency, minimizing congestion, and ensuring optimal use of the available road infrastructure. Moreover, it would allow prioritizing routes characterized by minimal energy consumption and the use of less busy roads where smoother driving could be adopted, which would benefit the environment while lowering vehicle wear and tear (e.g., Barth et al. 2014). Fine-grained user control over the system preferences of CAV automated route planning would substantially limit the reach of such concerted traffic management and the accomplishment of the prospected social benefits.

These remarks raise another objection to the idea of providing users with the possibility of exercising fine-grained control over automated route planning through system settings. So far, benefits and threats to autonomy have been mostly discussed by reference to individual users and the related independence, optionality, and rationality conditions. However, the collective benefits that might ensue from a centralized management of traffic—for instance, in terms of environmental sustainability—shed a different light on the importance of user autonomy as an ethical value vis-à-vis other noteworthy moral objectives. That being said, the effects of centralized traffic management on human autonomy also need to be thoroughly evaluated. Indeed, this form of traffic control could be opposed by pointing to potential threats to user autonomy in terms of privacy infringements and surveillance risks. Moreover, the cybersecurity risks involved in centralizing traffic management would need to be attentively evaluated. Balancing legitimate claims and striking acceptable trade-offs become unavoidable when the intersection between ethics and technology is acknowledged in its full complexity. The value of autonomy cannot be pursued in isolation from the wider ethical framework of driving automation. A full-fledged ethical analysis of automated route planning, then, must determine how to support user autonomy while also pursuing the other ethical objectives relevant to the context of driving automation.

5 Conclusion

To conclude our discussion, there is little doubt that automated route planning implemented in CAVs will reshape and co-shape user autonomy in the context of road transport. On the one hand, opportunities for respecting, protecting, and empowering autonomous behavior are easily detectable. On the other hand, pursuing these mobility benefits might lead to situations where routes are planned according to criteria that are not aligned with those of the users, which could be perceived as an illegitimate encroachment on their autonomy. Coping with these issues by supporting user choice in route planning through dedicated interfaces and settings can only go so far, offering merely partial assurance that situations of misalignment will not arise. Moreover, the consideration of other potential ethical benefits that might ensue from automating route planning calls for the establishment of a clear value hierarchy if trustworthy driving automation is to be practically pursued.

As a result, the analysis has confirmed that automated route planning is a CAV function that should be designed with an eye to the respect, protection, and promotion of human autonomy. However, the ways in which this automated function could reshape our autonomy leave many doubts about how to properly discharge this obligation. In analogy with many other AI applications, this case too has exhibited an ambiguous web of opportunities and threats that are extremely difficult to disentangle. Even though the problem remains open, the contours of the challenges to be faced are now clearer. The road to ethically adequate, trustworthy AI technologies is paved with such difficult, multifaceted, and nuanced issues. Their interdisciplinary exploration and discussion is critical to accomplishing in the field what has been theoretically acknowledged as a fundamental ethical objective.

Finally, the outcome of the analysis shows that design practices and solutions can only go so far in managing the ethical problems raised by AI systems such as CAVs. Indeed, design teams can play their part in the effort of realizing trustworthy technologies only if the same objective is consistently pursued together with the other stakeholders of driving automation. Identifying relevant individual and social values, proposing and debating value hierarchies, translating them into design requirements, enforcing their respect, validating and auditing technological products accordingly, regulating their deployment and use, and so on, are all necessary ingredients of a mission that extends far beyond design practices to involve the whole socio-technical system of driving automation. As Stilgoe (2018, 2020) and Santoni de Sio (2021) suggest, resisting the simplification of technological solutionism and remaining aware of the social complexity of the task at hand is critical in order not to underestimate the actual size of the challenge.