1 Introduction

The development, deployment and use of digital technologies have long been recognised as having ethical implications. Based on initial reflections on those implications by seminal scholars such as Wiener [122], [123] and Weizenbaum [121], a stream of research and reflection on ethics and computers emerged. The academic field arising from this work, typically called computer ethics, was and remains a thriving but nevertheless relatively small field that has managed to establish a body of knowledge, dedicated conferences, journals and research groups.

While computer ethics continues to be a topic of discussion, the dynamics of the ethical reflection of digital technology changed dramatically from approximately the middle of the 2010s when the concept of artificial intelligence (AI) (re-)gained international prominence. The assumption that AI was in the process of fundamentally changing many societal and business processes with manifest implications for most individuals, organisations and societies led to a plethora of research and policy initiatives aimed at understanding ethical issues of AI and finding ways of addressing them.

The assumption underlying this paper is that one can reasonably and transparently distinguish between the discourse on computer ethics and the discourse on the ethics of AI. If this is the case, then it would be advantageous to participants in both discourses to better understand their differences and similarities. This paper, therefore, asks the research question: how and to what extent do the discourses of computer ethics and the ethics of AI differ from one another?

The paper is furthermore motivated by a second assumption, which is that ethical reflection on digital technologies will continue to develop and that there will be future discourses, based on novel technologies and their applications, that go beyond both computer ethics and the ethics of AI. If this turns out to be true, then an understanding of the commonalities and persistent features of computer ethics and the ethics of AI may well provide insights into the ethical concerns that can be expected to arise with the next generation of digital technologies and their applications. The second question that the paper seeks to answer is, therefore: what can be deduced about a general ethics of digital technologies by investigating computer ethics and the ethics of AI?

These are important questions for several reasons. Answering them facilitates mutual awareness and understanding of computer ethics and the ethics of AI. Such an understanding can help both discourses identify gaps in existing ideas. For computer ethics scholars, this may be an avenue to contribute their work to the broader societal discourse on AI. For scholars involved in the ethics of AI debate, it may help to avoid repeating settled discussions. But even more importantly, comparing computer ethics and the ethics of AI allows the paper to look beyond current discussions. A key contribution of the paper is the argument that an analysis of computer ethics and the ethics of AI allows for the identification of those aspects of the discourse that remain constant and are independent of specific technologies. The paper suggests that a weakness of both computer ethics and the ethics of AI is their focus on a particular technology or artefact, i.e. computers or AI. It argues that a better understanding of ethical issues can be achieved by taking seriously the systems nature of digital technologies. One stream of research that has not been prominent in the ethics-related debate is that of digital (innovation) ecosystems. By moving away from an artefact and looking at the ethics of digital ecosystems, it may be possible to proactively engage with novel and emerging technologies while the exact terminology to describe them is still being developed. This would allow early attention to be paid to the ethical aspects of such technologies.

The paper proceeds as follows. The next section summarises the discourses on computer ethics and on the ethics of AI with a view to identifying both changing and constant aspects between these two. This includes a justification of the approach and a more detailed description of aspects and components of the discourses to be compared. This provides the basis for the description and critical comparison of the two discourses. The identification of overlaps and continuity provides the starting point for a discussion of a future-proof digital ethics.

2 Computer ethics and the ethics of AI

The argument of the paper rests on the assumption that one can reasonably distinguish between computer ethics and the ethics of AI. This assumption is somewhat problematic. A plausible reading is that the ethics of AI is simply a part or an extension of computer ethics. This paper therefore does not propose any categorical difference between computer ethics and the ethics of AI but simply suggests that it is an empirical phenomenon that these two discourses differ to some degree.

One argument that supports a distinction between computer ethics and the ethics of AI is the level of attention they receive. While many of the topics of interest to computer ethics, such as privacy, data protection or intellectual property, have raised societal and, thus, political interest, this has never led to the inclusion of computer ethics terminology in public policy discourse. This is very different for the ethics of AI, which is not just a thriving topic of academic debate, but which is explicitly dealt with by numerous policy proposals [104]. A related aspect of the distinction refers to the participants in the discourse. Where computer ethics is to a large extent an academic topic, the ethics of AI draws much more on contributions from industry, media and policy.

This may suffice as a justification for the chosen approach. The validity of these observations is discussed in more detail below. Figure 1 represents the logic of the research described in this paper.

Fig. 1 Representation of the research logic of the paper

The two blue ellipses on the left represent the currently existing discourses on computer ethics and the ethics of AI. The differences and similarities between these two are explored later in this section. From the insights thus generated, the paper progresses to the question of what can be learned from these existing discourses to prepare the future discussion of the ethics of emerging digital technologies.

2.1 Methodology

The methodological basis of this paper is that of a literature review, more specifically of a comparison of two bodies of literature. Literature reviews are a key ingredient across all academic disciplines [42] and form at least part of most publications. There are numerous approaches to reviewing various bodies of literature that serve different purposes [115]. Rowe [106] suggests a typology for literature reviews along four different dimensions (goal with respect to theory, breadth, systematicity, argumentative strategy).

A central challenge for this paper is that the distinction between computer ethics and the ethics of AI is not clear-cut, but rather a matter of degree and emphasis. This is exacerbated by the fact that the terminology is ambiguous. So far, I have talked about computer ethics and the ethics of AI. Neither of these terms is used consistently. While the term computer ethics is well established, it is closely linked with others, such as the ethics of ICT [105], information technology ethics [110] or cyberethics [111]. Computer ethics is closely related to information ethics, to the point where there are several publications that include both terms in the title [56, 120]. The link between computer ethics and information ethics is discussed in more detail under the scope of the topic below.

Just as there are different terms that overlap with computer ethics, there are related terms describing the ethics of AI, such as responsible AI [15, 38, 45, 118] or AI for good [17, 69]. In addition, the term ethics is used inconsistently. It sometimes refers to ethics as a philosophical discipline with references to ethical theories. However, it often covers ad hoc concerns about particular situations or developments that are perceived as morally problematic. Many such issues could equally well be described as social concerns. Many of them also have a legal aspect, in particular where they pertain to established bodies of law, notably human rights law. The use of the term 'ethics' in this paper, therefore, is a shorthand for all these uses in the discourse.

The comparison of the discourses on computer ethics and the ethics of AI thus requires criteria that allow one to determine the content of the two discourses. An important starting point for the delimitation of the computer ethics discourse is the fact that there are several published accounts that review and classify it. These notably include work undertaken by Terry Bynum [27,28,29] but also other reflective accounts of the field [117]. There are several seminal publications that deserve to be mentioned as defining the discourse of computer ethics. Jim Moor [93] notably asked the question "what is computer ethics?", and Deborah Johnson [73] provided an answer in the first textbook on the topic, also initially published in 1985. The description of computer ethics in this paper takes its point of departure from these defining publications. It also takes into account other sources, which include a number of edited volumes, work published in relevant conferences (notably Computer Ethics Philosophical Enquiry (CEPE), Computers and Philosophy (CAP) and ETHICOMP) as well as published accounts of the ethics of computing in adjacent fields, such as information systems or computing [113].

The debate on the ethics of AI is probably more difficult to delineate than the one on computer ethics. However, there are some foundational texts and review articles that can help with the task. Müller's recent entry in the Stanford Encyclopedia of Philosophy [97] provides a good overview. There are several review and overview papers, in particular of ethical principles [54, 72]. In addition, there is a quickly growing literature that includes several recent monographs [41, 45] and new journals, including the Springer journal AI and Ethics [84]. These documents can serve as the starting point for delineating the discourse, which also covers many publications from neighbouring disciplines as well as policy and general media contributions. It should be clear that these criteria do not constitute a sharp delineation: there will be many contributions that could count under both headings and some that fit neither. However, despite the fuzziness of the demarcation line, this paper submits that a distinction between the two discourses is possible to the point where it allows a meaningful comparison.

In order for such a comparison to be interesting, it requires a clarification of which aspects can be expected to differ, which is the subject of the following section.

2.2 Differences between computer ethics and the ethics of AI

This section starts with an overview of the aspects that are expected to differ between the two discourses and then discusses each of these in more detail. The obvious starting point for a comparison of the discourses on computer ethics and the ethics of AI is the scope of the discourse, in particular the technologies covered by it. This leads to the topics that are covered and the issues that are of defining interest to the discourse. The next area is the theoretical basis that informs the discourse and the reference disciplines that it draws from. Computer ethics and the ethics of AI may also differ in the solutions to these issues and the mitigation strategies they propose. Finally, there is the question of the broader importance and impact of the discourses.

Figure 2 represents the different aspects of the two discourses that will now be analysed in more detail.

Fig. 2 Characteristics of the discourse

2.2.1 Scope: technology and its features

The question of the exact scope of both discourses has been the subject of reflection within the discourses themselves and has varied over time. The early roots of computer ethics, as represented by Wiener's [122] work, were inspired by the initial developments of digital computing and informed by his experience of contributing to them during the Second World War. Wiener observed characteristics of these devices, such as an increased measure of autonomy and independence from humans, which he saw as problematic. Similarly, Weizenbaum's [121] experience of natural language processing (an area that forms part of AI) led him to voice concerns about the potential social uses of the technology (such as the ELIZA conversational system).

By the time the term "computer ethics" was coined in the 1980s, mainframe computers were already well established in businesses and organisations, and initial indications of personal computer use could be detected. The Apple II was launched in 1977, and the BBC Micro and the IBM 5150 came to market in 1981, paving the way for widespread adoption of PCs and home computers. At this time, it was reasonably clear what constituted a computer, and the discourse, therefore, spent little time on definitions of the underlying technology and instead focused on the ethically problematic characteristics of the technology.

The initial clarity of the debate faded away because of technical developments. Further miniaturisation of computer chips, progress in networking, the development of the smartphone as well as the arrival of new applications such as social media and electronic commerce radically changed the landscape. At some point in the 2000s, so many consumer devices had integrated computing devices and capabilities that the description of something as a computer was no longer useful. This may explain the changing emphasis from the term "computer ethics" to "information ethics", which can be seen, for example, by the change of the title of Terry Bynum's [29] entry in the Stanford Encyclopedia of Philosophy which started out in 2001 as "Computer Ethics: Basic Concepts and Historical Overview" and was changed in 2008 to "Computer and Information Ethics". The difference between computer ethics and information ethics goes deeper than the question of technology and we return to it below, but Bynum's changed title is indicative of the problem of delimiting the scope of computer ethics in the light of rapid development of computing technologies.

The challenges of delimiting computer ethics are mirrored by the challenge of defining the scope of the ethics of AI. The concept of AI was coined in 1956 [88] in a funding proposal that was based on the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". It set out to explore "how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." These ambitions remain largely intact for current AI research, but they do not explain why ethics of AI became a pervasive discourse from the mid-2010s.

The history of AI (cf. [19]) includes a history of philosophical and ethical questions [31]. AI is a field of research, generally accepted to be a sub-field of computer science, that has developed several themes and bodies of theory, which point to different concepts of AI. Shneiderman [107] suggests a simple distinction between two goals of AI that is helpful for understanding the conceptual challenge faced by the ethics of AI discourse. The two goals that Shneiderman sees for AI are: first, emulation, to understand human abilities and then improve on them; and, second, the application of technical methods to develop products and services. This distinction of goals aligns well with the well-established distinction between narrow AI and strong or general AI. Narrow AI aims to fulfil specifically described goals. In recent years, it has been hugely successful in the rapidly developing sub-field of machine learning [10], based on the implementation of deep learning through artificial neural networks and related technologies [114]. Narrow AI, in particular as realised in machine learning using neural networks to analyse and learn from large datasets, has roots going back decades. However, it is widely believed that these well-known technologies came to the fore because of advances in computing power, the development of algorithms and the availability of large datasets [21, 65].

In addition to this narrow AI aimed at solving practical problems, there is the long-standing aim to develop technologies with human-like abilities. These systems would be able to transfer learning across domains and are sometimes called artificial general intelligence [41]. Artificial general intelligence forms part of the earliest attempts to model intelligent behaviour through symbolic representations of reality [94], sometimes referred to as good old-fashioned AI or GOFAI [55]. It remains contested whether artificial general intelligence is achievable and, even if so, whether it could be done using current technological principles (i.e. digital computers and Turing machines) [56].

There are attempts to interpret the difference between narrow and general AI as a difference in temporal horizon, with narrow AI focusing on short-term goals, whereas general AI is seen as a long-term endeavour [13, 32]. Notwithstanding the validity of this interpretation, the inclusion of narrow and general AI in the discussion means that its technical scope is large. It includes well-understood current technologies of machine learning with ethically relevant properties (e.g. need for large datasets, opacity of neural networks) as well as less determined future technologies that would display human-like properties. This breadth of the technical scope has important consequences for possible issues arising from the technology, as will be discussed below.

2.2.2 Topics and issues

The topics and issues discussed by both discourses cover all aspects of life where computers or AI have consequences for individuals and groups. It is, therefore, beyond the scope of this paper to provide a comprehensive overview of all topics discussed. Instead, the aim of this section is to provide an indication of some key topics with the aim of showing which of them have changed over time or remained stable.

In the introduction to the 1985 special issue of the journal Metaphilosophy on computer ethics, the editor [46] stated that the central issue of computer ethics would be the replacement of humans by computers, in particular in tasks requiring judgment. It was clear at the time, however, that other issues were relevant as well, notably invasions of privacy, computer crime and topics related to the way computer professionals deal with clients and society, including ownership of programs, responsibility for computer errors and the structure of professional codes of ethics. This structure is retained in the 2001 version of Bynum's [29] encyclopaedia entry, which lists the following issues: computers in the workplace, computer crime, privacy and anonymity, intellectual property, professional responsibility, globalisation and the metaethics of computer ethics. Picking up the discussion of the ethics of computing in the neighbouring discipline of information systems, Mason [87] proposed the acronym PAPA to point to key issues: privacy, accuracy, property and accessibility.

A more recent survey of the computing-oriented literature suggests that the topics discussed remain largely stable [113]. It may, therefore, not be surprising that there is much continuity from computer ethics in the ethics of AI debate. One way to look at this discussion is to distinguish between issues directly related to narrow AI, broader socio-technical concerns and longer-term questions. Current machine learning approaches require large datasets for training and validation, and they are opaque, i.e. it is difficult to understand how input gets translated into output. This combination leads to concerns about privacy and data protection [26, 47] as well as the widely discussed and interrelated questions of lack of transparency [3, 109], accountability, bias [34] and discrimination [96]. In addition, current machine learning systems raise questions of reliability, security [7, 10, 25] and safety [45].
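To make the opacity point concrete, the following minimal sketch (the author's illustration rather than part of the discourse reviewed here; it assumes the scikit-learn library and uses a synthetic dataset standing in for the large training sets mentioned above) trains a small neural network whose individual predictions are easy to obtain, but whose learned parameters offer no human-readable account of how input is translated into output.

```python
# A minimal sketch of machine-learning opacity. Assumes scikit-learn;
# the dataset and model configuration are illustrative only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A synthetic classification dataset: 1000 examples, 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A small multi-layer perceptron with two hidden layers of 50 units each.
clf = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0)
clf.fit(X, y)

# The prediction for a single input is trivially available...
print("prediction:", clf.predict(X[:1]))

# ...but the only 'explanation' of that prediction consists of thousands
# of numeric weights, which do not translate into human-readable reasons.
n_params = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print("learned parameters:", n_params)
```

Even for this toy model, several thousand learned parameters mediate between input and output, which is one way of seeing why transparency and accountability have become central concerns in the discourse.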

The impact of AI-enabled socio-technical systems on society and communities is covered well in the discourse. AI is a key enabler of the digital transformation of organisations and society, which may have significant impacts of ethical relevance. These include economic concerns, notably questions of employment [77, 124] and labour relationships including worker surveillance [97], as well as concerns about justice and distribution [96]. Digital transformation can affect political power constellations [98] and both support and weaken citizen participation. Possible consequences of the use of AI include changes to the nature of warfare [103] and environmental impacts [99]. Concerns are also raised about how machines may enhance or limit human agency [18, 40].

Two concepts that figure prominently in the AI ethics discourse are those of trust and trustworthiness. The AI HLEG [6] structured its findings and recommendations in a way that seems to suggest that ethics is considered a means to strengthen the trustworthiness of AI technologies, which then engenders trust and, thus, acceptance and use. This functional use of ethics is philosophically highly problematic but seems to be driven by a policy agenda that sees the desirability of AI as an axiom and ethics as a means to achieve targets for uptake.

Finally, there is some debate about the long-term issues related to artificial general intelligence. Due to the open question of whether the current type of technologies can achieve this [108], it is contested how much attention should be given to questions such as the singularity [80], superintelligence [22], etc. These questions do not figure prominently in current policy-oriented discussions, but they continue to attract interest in the scientific community and beyond.

The topics and issues discussed in computer ethics and the ethics of AI show a high level of consistency. Many of the discussions of computer ethics are continued or echoed in the ethics of AI. This includes questions of privacy and data protection and security, but also wider societal consequences of technical developments. At the same time, some topics are less visible, have morphed or have moved into different discourses. The computer ethics discourse, for example, had a strong stream of discussion of the ownership of data and computer code, with a heavy emphasis on the communal nature of intellectual property. This discussion has changed profoundly, with some aspects now appearing to be settled practice: ownership of content, for example, is now administered through structures based on business models that emerged out of the competing views on intellectual property. Netflix, iTunes, etc. employ a distribution service and subscription model that appears to satisfy consumers, producers and intermediaries. Other aspects of ownership remain highly contested, such as the right to benefit from secondary use of process data, which underpins what Zuboff [126] calls surveillance capitalism.

2.2.3 Theoretical basis and reference disciplines

While there is a high level of continuity in terms of issues and topics, the theoretical positions vary greatly between computer ethics and the ethics of AI. This may have to do with the reference disciplines [11, 14, 78], i.e. the academic disciplines in which the contributors to the discourses were originally trained or from which they adopt theoretical positions they apply to computing and AI [85].

Both computer ethics and the ethics of AI are highly interdisciplinary and draw from a range of reference disciplines. In both cases there is a strong influence of philosophy, which is not surprising, given that ethics is a discipline of philosophy. Similarly, there is a strong presence of contributors from technical disciplines. While the computer ethics discourse draws on contributions from computer scientists, the ethics of AI has attracted attention from more specialised communities that work on AI, notably at present the machine learning community. The most prominent manifestation of this is the FAT/FAccT community that focuses on fairness, accountability and transparency (https://facctconference.org/). There are also contributions from other academic fields, such as technology law and the social sciences, including science and technology studies. Some fields, such as information systems, are less visible than one could expect them to be in the current discourse [112].

While details of the disciplinary nature of the contributions to both discourses are difficult to assess, there are notable changes in the use of foundational concepts. In computer ethics, there is a strong emphasis on well-established ethical theories, notably duty-based theories [75, 76], theories focusing on the consequences of actions [16, 89] as well as theories focusing on individual character and virtue [9, 83]. Ethical theorising has of course not been confined to these, and there are examples of other ethical theories applied to computing, such as the ethics of care [4, 60] or discourse ethics [91]. In addition, there have been proposals for ethical approaches uniquely suited to computing technologies, such as disclosive ethics [23, 70].

The ethics of AI discourse also uses a rich array of ethical theories [82], but it displays an undeniable focus on principle-based ethical guidelines [72]. This approach is dominant in biomedical ethics [37], and its adoption by the ethics of AI discourse may be explained by the well-established procedures of biomedical ethics, which promise practical ways of dealing with ethical issues, as well as by an increasing interest of the biomedical ethics community in computing and AI technologies. However, it should be noted that this reliance on principlism [39] is contested within the biomedical community [79] and has been questioned in the AI field [67, 92], even though it remains dominant at present.

A further significant difference between computer ethics and the ethics of AI is that the latter has a much stronger emphasis on the law. One aspect of this legal emphasis is the recognition that many of the issues discussed in the ethics of AI are well-established issues of human rights, e.g. privacy or the avoidance of discrimination and physical harm. There are, thus, numerous vocal contributors to the discourse who emphasise human rights as a source of normativity in the ethics of AI as well as a way to address issues [2, 43, 81, 96, 102]. This legal emphasis translates into a focus on legislation and regulation as a way of dealing with these issues, as discussed in the next section.

2.2.4 Solutions and mitigation

One can similarly observe some consistency and continuity, but also some discontinuity, with regard to proposals for addressing these issues. This is clearly a complex set of questions that depends on the issue in question and on the individual, group or organisation that is to deal with it. While it is, thus, not possible to provide a comprehensive overview of the different ways in which the issues can be resolved or mitigated, it is possible to highlight some differences between the two discourses [120].

One proposal that figured heavily in the computer ethics discourse but is less visible in the ethics of AI is that of professionalism [8, 30, 74]. While it was and remains contested whether and to what degree computer experts are, should be or would want to be professionals, the idea of institutionalising professionalism as a way to deal with ethical issues has driven the development of organisations that portray themselves as professional bodies for computing [24, 62]. The uncertain status of computing as a profession is reflected in the status of AI, which can probably at best be regarded as a sub-profession.

Both discourses underline the importance of knowledge, learning and education as conditions for successfully navigating ethical questions [20]. Both ask what help can be provided to people working in the design and development of technology and aim to develop suitable methodologies [68]. This is the basis of various "by design" approaches [33, 64, 86] that are based on the principles of value-sensitive design [58, 85]. Specific methodologies for incorporating ethical considerations into organisational practice can be found both in the computer ethics debate [63, 66] and in the ethics of AI discourse [7, 45, 48].

One area where the ethics of AI debate appears to be much more visible and impactful than computer ethics is that of legislation and regulation. This does not imply that the ethics of AI has a greater fundamental affinity to legislation; rather, it is based on the empirical observation that ethical (and other) issues of AI are perceived to be in need of legislation due to their potential impact (see next section). Rodrigues [104] provides an overview of recent legislative agendas. The most prominent example is probably the European Commission's proposed Regulation for AI [50], which would bring in sweeping changes to the AI field, mostly based on earlier ethical discussion. In addition to proposed legislation in various jurisdictions, there are proposals for the creation of new regulatory bodies [44, 51] and international structures to govern AI [71, 119]. It is probably not surprising that some actors in the AI field actively campaign against legislation; industry associations such as the Partnership on AI, as well as company codes of conduct, can be seen as ways of heading off legislation.

Computer ethics, on the other hand, also touched on and influenced legislative processes concerning topics in its field of interest, notably data protection and intellectual property. However, the attention paid to AI by legislators is much higher than it ever was to computers in general.

2.2.5 Importance and impact

One reason for the high prevalence of legislation and regulation with regard to AI is the apparent importance and impact of the technology. AI is generally described as having an unprecedented impact on most aspects of life, which calls for ethical attention. Whatever the accuracy of this narrative, it is broadly accepted across academia, policy and broader societal discourse. It is also the mostly unquestioned driver for the engagement with ethics. Questions about the nature of AI, its characteristics, and its likely and certain consequences are dealt with under the implicit assumption that they must be dealt with due to the importance of the technology.

The computer ethics debate does not share this unquestioned assumption of the importance of its subject matter. In fact, it was a recurrent theme of computer ethics to ask whether it was needed at all [57, 116]. This is, of course, a reasonable question to ask. There are a number of fields of applied ethics, e.g. medical ethics, business ethics or environmental ethics. But there are few, if any, that focus on a particular artefact, such as a computer. So why would computer ethics be called for? Moor [93] famously proposed that it is logical malleability, the fact that uses need not even be foreseen by the designer, that sets computers apart from other artefacts, such as cars or airplanes. This remains a strong argument that also applies to current AI. With the growing spread of computers, first in organisations, then through personal and mobile computing which facilitated everyday applications including electronic commerce and social media, computer ethics could point to the undeniable impact of computing technology, which paved the way for the now ubiquitous reference to the impact of AI.

3 Towards an ethics of digital ecosystems

So far, this article has suggested that computer ethics and the ethics of AI can be read as two related, but distinct discourses and it has endeavoured to elucidate the differences and similarities between these two. While this should have convinced the reader that such a distinction is possible and helpful in understanding both discourses, it is also clear that other interpretations are possible. The ethics of AI can be seen as a continuation of the computer ethics discourse that has attracted new participants and led to a shift of topics, positions and impact. Both interpretations allow for a critical analysis of both discourses with a view to identifying their shared strengths and weaknesses and an exploration of what can be learned from them that can prepare the next discourse that can be expected to arise.

This question is motivated by the assumption that the ethics of AI discourse is not the final step in the discussion. AI is many things, but it is also currently a hype and an academic fashion. This is not to deny its importance but to recognise that academia, like policy and general discussion, follows the technology hype cycle [52], and that attention to technologies, management models and research approaches has the characteristics of fashion cycles [1, 12]. It is, therefore, reasonable to expect that the current focus on AI will peak and be replaced by another topic of debate. The purpose of this section is to discuss what may emerge from and follow the ethics of AI discourse and how this next stage of the debate can best profit from insights generated by the computer ethics and ethics of AI discourses.

The term "computer ethics" lost some of its appeal when computing technologies became pervasive and integrated into many other devices. When a computer is in every phone, car and even most washing machines and refrigerators, then the term "computer ethics" becomes too fuzzy to be useful. A similar fate is likely to befall AI, or may already have done so. On the one hand, "AI" as a term is already too broad, as it covers everything from specific machine learning techniques to fictional artificial general intelligence. On the other hand, it is too narrow, given that it excludes many of the current and emerging technologies that anchor part of its future impact, such as quantum computing, neuromorphic technologies, the Internet of Things, edge computing, etc. And we can of course expect new technologies and terminology to emerge to add to this complexity.

One weakness that both computer ethics and the ethics of AI share is their apparent focus on a particular piece of technology. Ethical, social, human rights and other issues never arise from a technology per se, however, but result from the use of technologies by humans in societal, organisational and other settings. This is not to suggest that technologies are value neutral, but that the affordances they possess [59, 100] can play out differently in different environments.

To prepare for the next wave of the ethics of technology discussion that will succeed the ethics of AI, it may, therefore, be advisable to take a slightly different perspective, one that reduces the focus on particular technologies. One family of such perspectives is based on systems theory [99]. A number of such theories have been applied to computing technologies, such as complex adaptive systems [90] or soft systems [35, 36].

A possible use of the systems concept to understand the way technology and social environments interact is that of an ecosystem. The metaphor of ecosystems to describe AI and its broader social and ethical consequences has already been employed widely by scholars [53] as well as policymakers. The European Commission, for example, in its White Paper [49] that prepared the proposed Regulation [50], framed European AI policy in terms of an ecosystem of excellence and an ecosystem of trust, with the latter representing ethical, social and legal concerns. The OECD [101] similarly proposes the development of a digital ecosystem for AI. The World Economic Forum [125] underlines the logic of this terminology when it emphasises the importance of a system-wide approach if responses to the ethics of AI are to be successful.

From a scholarly perspective, it is interesting to observe that a body of research has developed since the mid-1990s that uses the concept of an ecosystem to describe how technologies are used in the economic system [5, 61, 95]. This discourse is of interest to this paper because it has developed a rich set of theoretical positions, substantive insights and methodologies that can be used to understand specific socio-technical systems. At the same time, this discourse has placed very little emphasis on the ethical and normative aspects of these ecosystems. There is not the space here to pursue this argument in more detail, but the paper suggests that combining these different perspectives and looking at the ethics of digital (innovation) ecosystems can provide a helpful new perspective.

The benefit of using such a digital ecosystems-based approach is that it moves away from a particular technology and opens the view to the way in which technical developments interact with social developments. This broadens the view to encompass application areas, social structures and societal environments as well as technical affordances. Actual ethical concerns are affected by all of these different factors and the dynamics of their relationships.

The proposal arising from this insight is, thus, that, to prepare the next wave of the ethics and technology discussion, the focus should not be on predicting the next big technology, but on exploring how ethical issues arise in socio-technical (innovation) ecosystems. This is a perspective that can be employed right now and used to better understand the ethics of AI or of computing more generally. It invites detailed empirical observations of the social realities of the development, deployment and use of current and past technology. It is similarly open to sophisticated ethical and theoretical investigations. This understanding can then be the baseline for exploring the consequences of technological and social change. Making use of this perspective for the current ethics of AI debate would have the great benefit that the question of adequately defining AI loses its urgency. The key question then becomes how socio-technical innovation ecosystems develop, which is a question that is open to the inclusion of other types of technology, from quantum computing to well-established computational and other technological artefacts.

Taking this perspective, which might be called the "ethics of digital ecosystems", moves beyond individual technologies and allows keeping track of and continuing established ethical discussions. An ethical analysis of digital ecosystems will need to delineate the systems in question, which will be required in order to determine the capabilities of these ecosystems. The capabilities, in turn, will be what gives rise to possible social applications and the resulting benefits and concerns. Whatever the next technological hype will be, it is a safe bet that it will continue at least some trends from the past and that the corresponding ethical debates will remain valid. For example, it is plausible that future digital technologies will make use of, analyse and produce personal data, hence continuing the need for considerations of privacy and data protection. Security, safety and reliability of any future socio-technical system are similarly a good bet in terms of future relevance.

The focus on the broader innovation ecosystem furthermore means that many of the currently discussed topics can be better framed as relevant topics of discussion. Questions of political participation, economic justice or human autonomy are much more easily understood as aspects of socio-technical systems than as intrinsically linked to a particular technology. The change of perspective towards digital ecosystems can, thus, strengthen the plausibility and relevance of some of the current topics of debate.

The same can be said for the discussion of possible mitigations. By focusing on digital innovation ecosystems, the breadth of possible mitigation strategies automatically increases. In computer ethics and the ethics of AI, the focus is on technical artefacts, and there is a temptation to link ethical issues as well as responses to these issues to the artefacts themselves. This is where approaches such as value-sensitive design or ethics by design derive their legitimacy. The move away from the artefact focus towards the socio-technical ecosystem does not invalidate such approaches but clearly shows that the broader context needs to be included, thus opening up the discussion to regulation, legislation, democratic participation and societal debate as means of shaping innovation ecosystems.

The move beyond the ethics of AI towards an ethics of digital innovation ecosystems will further broaden the disciplines and stakeholder groups involved in the discussion. Those groups who have undertaken research on computer ethics will remain important, as will the additional groups that have developed or moved to exploring the ethics of AI. However, the move towards digital innovation ecosystems makes it clear that additional perspectives will be required to gain a full understanding of potential problems and solutions. Innovation ecosystem research is done in fields like business studies and information systems, which have much to contribute but have traditionally had limited visibility in computer ethics and the ethics of AI. Such a broadening of the disciplines and fields involved suggests that the range of theoretical perspectives is also likely to increase. Traditional theories of philosophical ethics will doubtlessly remain relevant, and the focus on mid-level principles that the ethics of AI has promoted is similarly likely to remain important for guiding ethical reflection. However, a broader range of theories is likely to be applied, including systems theories, theories from business and organisational studies as well as the literature on innovation ecosystems.

4 Conclusion

This paper started from the intuition that there is a noticeable difference between the discourses on computer ethics and the ethics of AI. It explored this difference with a view to examining how understanding it can help us prepare for the inevitable next discourse, which will follow the current discussion of the ethics of AI. The analysis of the two discourses has outlined that there are notable differences in terms of the scope of the discussion, topics and issues, theoretical basis and reference disciplines, solutions and mitigations, and expected impacts. It is, thus, legitimate to draw a dividing line between the two discourses. However, it has also become clear that there is much continuity and overlap and that, to a significant degree, the ethics of AI discourse is a continuation and extension of the computer ethics discourse. This part of the analysis presented in the paper should help participants in both discourses to see similarities and discontinuities more clearly and to appreciate where research has already been done that can benefit the respective other discourse.

The exact insights to be gained from the review of the two discourses clearly depend on the prior knowledge of the observer. Individuals who are intimately familiar with both discourses may be aware of all the various angles. However, participants in the computer ethics discourse who have not followed the ethics of AI debate can find insights with regard to current topics and issues, e.g. the broader socio-economic debates that surround AI. They can similarly benefit from an understanding of how biomedical principlism is being applied to AI, which may offer avenues of impact, solutions and mitigations that computer ethics tended to struggle with. Similarly, a new entrant to the ethics of AI debate may benefit from an appreciation of computer ethics by realising that many of the topics have a decades-long history and that numerous ethical positions and mitigation structures are well established and do not need to be reinvented.

Following from these insights, the paper then moved to the question of what the next discourse is likely to be. This part of the paper is driven by the recognition that the emphasis on a particular technology or family of technologies, be this computers or AI, is not particularly helpful. Technologies unfold their ethical benefits and problems when deployed and used in the context of socio-technical systems. It is less the affordances of a technology per se than the way in which those affordances evolve in practical contexts that is of interest to ethical reflection. There are numerous ways in which these socio-technical systems can be described, and this paper has proposed that the concept of innovation ecosystems may offer one suitable approach.

The outcome of the paper is, thus, the suggestion to start preparing the discourse of the ethics of digital innovation ecosystems. This will again be a somewhat different discourse from the ones on computer ethics and the ethics of AI, but it can also count as a continuation of those two. The shift of the topic away from computing or AI gives this discourse the flexibility to accommodate existing and emerging technologies from quantum computing to the IoT without requiring a major shift of the debate. Maybe more importantly, it will require more focused attention to the social side of innovation ecosystems, which means that aspects like the application area and the local and cultural context of use will figure prominently.

By calling for this shift of the debate, the paper provides the basis for such a shift and can help shape current debates in this direction. This is particularly necessary with regard to the ethics of AI, which may otherwise be locked into mitigation strategies, ranging from legislation and regulation to standardisation and organisational practice, that focus on the concept of AI and may misdirect efforts away from the areas of greatest need.

This shift of the debate and the attention to the ethics of innovation ecosystems will not be a panacea. The need for a delimitation of the subject of debate will remain, which means that the exact content and membership of an innovation ecosystem that raises ethical questions will remain open to contestation. Systems-based approaches raise questions of individual agency and the locus of ethics, which the dominant ethical theories may find difficult to answer. The innovation ecosystems construct is also just an umbrella term beneath which there will be many specific innovation ecosystems, which means that attention to the empirical realisation of such systems will need to grow.

Despite the fact that this shift of the debate will require significant additional effort, it is still worth considering. The currently ubiquitous discussion of the ethics of AI will continue for the foreseeable future. At the same time, it is already visibly reaching its limitations, for example by including numerous ethical issues that are not unique to AI. In order for the discussion to remain specific and retain the flexibility to react to future developments, it will need to reconsider its underpinnings. This paper suggests that this can be achieved by refocusing its scope and explicitly embracing digital innovation ecosystems as the subject of ethical reflection. Doing so will ensure that many of the lessons that have been learned over years and decades of working on the ethics of computing and AI remain present and relevant, and that there is a well-established starting point from which we can engage with the next generations of digital technologies to ensure that their creation and use benefit humanity.