This chapter analyses the concept of AI ecosystems with a view to identifying how the ecosystem metaphor can help deal with ethical questions. The first step is to introduce the concept of ecosystems in more detail, drawing specifically on the literature on innovation ecosystems. This allows the identification of characteristics of ecosystems such as their openness, the co-evolution and mutual learning of their members, and the interdependent and complex relationships between those members. These characteristics underlie the challenges that an ethics-driven approach to ecosystems must consider.
- AI ecosystems
- Innovation ecosystems
- Ethics of ecosystems
Stakeholders in AI are numerous and ethical issues are complex. I propose looking at AI from a systems perspective, specifically employing the idea of an AI ecosystem. This chapter will prepare the groundwork for my later recommendations by giving the necessary background on ecosystems.
6.1 An Ecosystems View of AI
Speaking of AI in terms of ecosystems is by now well established. The EC’s White Paper sees AI as composed of an ecosystem of excellence and an ecosystem of trust (European Commission 2020a). The OECD recommends that national policymakers foster a digital ecosystem for AI (OECD 2019: 3). Charlotte Stix (n.d.) has used the term to describe EU AI policy. The UK’s Digital Catapult (2020) has used it in the context of AI ethics, to explore what practical lessons can be learned. The first draft of UNESCO’s recommendation on the ethics of artificial intelligence suggests that “Member States should foster the development of, and access to, a digital ecosystem for ethical AI” (UNESCO 2020).
To use the ecosystem metaphor productively and examine how it can promote our understanding of AI and allow us to deduce recommendations on how such an ecosystem could be shaped, we need to look at the history of the use of this metaphor, the purposes for which it has been developed and the limitations that it may have.
6.1.1 AI Innovation Ecosystems
The use of terms such as “innovation ecosystem” is relatively widespread in innovation management and related fields and relates only vaguely to the original use of the term in biology (see box).
The term ecosystem originally stems from biology. According to National Geographic (Rutledge et al. 2011), “an ecosystem is a geographic area where plants, animals, and other organisms, as well as weather and landscapes, work together to form a bubble of life”. Outside of biology, ecosystems are regarded as complex, interconnected networks of individual components, ranging from the “U.S. television ecosystem” (Ansari et al. 2016) to “ecosystem service assessments [for] mental health” (Bratman et al. 2019) to the conclusion that “1.4 million songs are inspired by ecosystems” (Coscieme 2015). It is a popular concept which suggests that the components of the system together function like a living organism.
One can distinguish several perspectives that are used in the innovation ecosystems literature. The dominant one is the organisational perspective, where researchers employ the term to better understand how organisations can gain a competitive advantage within a system. Viewing the organisation as part of an innovation ecosystem can provide insights into opportunities for growth (Adner 2006). The ecosystems perspective helps organisations understand that they can shape the ecosystem they are part of, but that the overall innovation is at least partly a function of the surrounding ecosystem (Nylund et al. 2019). Recognising this potential, organisations can use the ecosystems view to develop their strategy (Moore 1993) in general, with a particular focus on their innovation management activities (Ritala and Almpanopoulou 2017). One example of a question that the use of the ecosystem metaphor can help to answer is: how and why do organisations become more or less successful? Moore (1993) uses the example of IBM in the context of ecosystems. IBM was one of the most successful members of the new business community or ecosystem based on personal computers. It dominated this system for a while, but then became less profitable and lost its leadership of the market.
While this functional use of the ecosystem metaphor appears to be the dominant one in the fields of business and organisation studies, it is not the only one. An ecosystems perspective can also be used as a theoretical perspective that allows deeper insights into the behaviour of members of the ecosystem more generally. The ecosystem lens can equally be employed at the social level, for example by the social sciences to interpret the global economy as a living organism with a view to understanding its workings (Gomes et al. 2018).
From the very beginning of the innovation ecosystems literature (Moore 1993), it has been suggested that society can employ this perspective to provide an environment in which ecosystems can thrive. Another suggestion is that the ecosystems terminology can be used to improve the performance of entire innovation ecosystems (Pombo-Juárez et al. 2017). It has also long been recognised that it may be appropriate for entire ecosystems to perish in the interests of society as a whole (Moore 1993).
But what counts as an innovation ecosystem? I use this term broadly, ignoring the conceptual nuances that distinguish between terms such as “business ecosystem” (Gomes et al. 2018), “digital ecosystem” (Senyo et al. 2019), “digital business ecosystem” and “knowledge ecosystem” (Gomes et al. 2018), and further related ideas such as value chains. These distinctions may be valuable for specific purposes, but for the use of the ecosystems concept to develop normative insights into AI, they are of secondary relevance.
More interesting are the characteristics that the various types of ecosystem display. A key characteristic is that ecosystems are the place where evolution occurs. Darwinian evolution is widely accepted in both the natural and the social sciences as a theory that explains change (Porra 1999). This renders evolutionary theory attractive to fast-moving fields such as innovation. Moore’s (1993) seminal article, which first applied the ecosystem metaphor to socio-technical systems, proposes four stages in the development of ecosystems: birth, expansion, leadership, and self-renewal or death. The adoption of Darwinian evolutionary theory and the application of evolutionary principles to socio-technical systems are contested and ethically problematic, as I will show below. However, evolution appears to be highly attractive as a general theory of change.
In addition to explaining change, the ecosystems lens can help explain interdependencies between actors and why they develop together or co-evolve (Ritala and Almpanopoulou 2017). This co-evolution of interdependent actors explains why they have to compete and cooperate. Innovation ecosystems, like natural ecosystems, are open systems where new actors can emerge and incumbent ones need to react accordingly. Ecosystems can also be visualised as interdependent and interconnected networks in which mutual learning can occur (Nylund et al. 2019). These networks often have one central node (Gobble 2014), which may be what Gomes et al. (2018) call a “keystone leader”, i.e. a dominant organisation. Current innovation ecosystems are often organised around a technology platform.
These insights generated using the ecosystem metaphor do not presuppose that it is clear where the boundaries of an ecosystem lie. Pombo-Juárez et al. (2017) suggest that ecosystems consist of four layers – individuals, organisations, innovation systems and landscapes – which seems to suggest that it is possible to delineate which landscapes, with their inhabitants, constitute a particular ecosystem.
Examples of innovation ecosystems can be found at different levels of size and complexity. They can be geographically constrained, as in the case of cities (Nylund et al. 2019) or national innovation ecosystems (Gomes et al. 2018), but they can also be organised by other criteria. Adner (2006) gives the example of industries such as commercial printing, financial services, basic materials and logistics provision.
Of interest to this book is that innovation ecosystems are generally recognised to be subject to intervention and change. These interventions can be catalysed by members of the ecosystem, normally organisations, but also by individuals or non-organisational collective actors. They can also be triggered by actors who are involved in, support or contribute to the innovation ecosystem without necessarily being members of it. For instance, a regional ecosystem may be influenced by a national actor, or an AI ecosystem may be influenced by non-AI technical developments in, say, quantum computing. Innovation ecosystems are notably different from natural ecosystems in that they have an ability to reflect on their status and think about the future (Pombo-Juárez et al. 2017), with a view to changing and improving the situation.
Figure 6.1 summarises some of the key characteristics of innovation ecosystems that render them an interesting metaphor to describe an ensemble of socio-technical actors, including the AI landscape. The representation in the form of overlapping ellipses symbolises that these characteristics are not independent but influence one another.
It is easy to see why the ecosystem metaphor is applied liberally to AI. There are many different actors and stakeholders involved. These interact in complex ways with consequences that are difficult to predict. They are all mutually dependent, even though the disappearance of any one of them will not necessarily damage the overall system. They co-evolve and try to prosper.
Despite these advantages, there are significant drawbacks to the application of the concept of ecosystems to socio-technical systems. Oh et al. (2016) call it a flawed analogy. They point to the fact that, unlike natural ecosystems, innovation ecosystems are not themselves the outcome of evolutionary processes but are intentionally designed. They are concerned that an analogy that does not rest on rigorous conceptual and empirical analysis may preclude more detailed research and policy around innovation. Describing a social system in terms of a natural system furthermore leads to potential conceptual pitfalls. The heavy emphasis on evolutionary processes of selection can lead to an implied technological determinism. This means that the technology in question is seen as an exogenous and autonomous development that is inevitable and forces individuals and organisations to adapt (Grint and Woolgar 1997). Equally, it is problematic that the competitive struggle for survival implied in evolution would then apply not only to organisations but also potentially to cultures, where only those who are adapted to the technology survive (Paterson 2007).
There is a well-established link between Darwinism and capitalism (Rauch 1993), with Charles Darwin himself having freely admitted that his theory of evolution was inspired by the classical economists of the 18th and early 19th centuries, who focused on the principle of competitive individualism (Priest 2017). Hawkes (2003: 134) therefore goes so far as to call Darwin’s theory of evolution a “textbook example of the Marxist theory of ideology in practice”, with ideology being a ruling idea of the ruling classes (Shaw 1989).
The innovation ecosystem metaphor can serve ideological purposes by naturalising and thus hiding the fact that the capitalist means of production that these innovation systems are typically based on are the result of historical struggles and political processes that could well have led to other outcomes. To make matters worse, Darwinian ideas developed to describe the natural world have a history of being adopted to legitimate the outcomes of evolution, even where this is arguably inappropriate. This phenomenon is called social Darwinism (Crook 1996). Social Darwinism not only explains social change but can also be used to justify this change. This is a scientifically problematic category mistake which can also have morally outrageous consequences. At its worst, German Nazis adopted social Darwinism in their fantasy that the Aryan “race” was superior to others and needed to preserve its gene pool (Weikart 2013).
The ecosystem metaphor employed here to help explain, understand and guide the field of AI is thus very explicitly not harmless or neutral. The attempt to apply ethical thinking to it raises some further significant issues that I will discuss in more detail below. However, I hope that highlighting these pitfalls will make it possible to avoid them. To emphasise the point, I do not use the concept of ecosystem as a scientific description, but only as a metaphor, that is, a figure of speech and a symbolic representation.
The benefit of this use of the metaphor is that it helps to highlight some key features of AI that contribute to ethical issues and affect possible mitigations. I will use the metaphor later to derive some requirements for possible solutions which I hope will allow a more cohesive approach to mitigating ethical risks.
Before we move on to the ethics of ecosystems and the question of how such ecosystems can be shaped, it is important to describe relevant parts of the AI innovation ecosystem. Figure 6.2 provides a systems view of the ecosystem focusing on one particular mitigation strategy, namely the requirement to provide algorithmic impact assessments (see box).
Algorithmic Impact Assessments
Algorithms can be part of systems which make decisions. Algorithmic decision systems (ADS) “rely on the analysis of large amounts of personal data to infer correlations … [and] derive information deemed useful to make decisions” (Castelluccia and Le Métayer 2019). Decisions made by an ADS can be wrong (Oswald et al. 2018). Algorithmic impact assessments are designed to minimise this risk by identifying and reducing the risks of bias, discrimination and wrong decision-making.
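Such assessments typically combine documentation and review processes with simple quantitative checks for bias. As a purely illustrative sketch – the function name, the sample data and the “four-fifths” threshold are my assumptions, not part of any specific assessment framework – one such check might compare favourable-outcome rates across groups:

```python
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favourable-outcome rates (1 = favourable) between a
    protected group and a reference group. Values below roughly 0.8 are
    often treated as a warning sign (the informal 'four-fifths rule')."""
    def favourable_rate(group):
        # Select the outcomes belonging to this group and average them
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return favourable_rate(protected) / favourable_rate(reference)

# Hypothetical loan decisions: 1 = approved, 0 = declined
outcomes = [1, 0, 1, 1, 0, 1, 1, 1, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(outcomes, groups, protected="A", reference="B")
print(round(ratio, 2))  # group A is approved at 60% of the rate of group B
```

A check like this would flag the hypothetical system above for closer qualitative review; it does not by itself establish that the system is discriminatory, which is precisely why impact assessments embed such metrics in a broader evaluative process.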
The introduction of algorithmic impact assessments is one of the numerous proposals for addressing ethical issues in AI that were introduced in Chapter 5. The figure captures the key stakeholders and processes that would be involved in implementing such a proposal. What the example shows is that the characteristics of innovation ecosystems depicted in Figure 6.1, such as non-linearity, interdependence, openness and unclear boundaries, are easily identifiable in practice. This lends credence to my thesis that the innovation ecosystem metaphor is helpful for understanding AI. If this is so, then it is worth taking the next step and thinking about what the metaphor can teach us about intervening in the ecosystem and how we can use it to promote human flourishing.
6.2 Ethics of and in (Eco)Systems
What I have shown so far is that the metaphor of ecosystems is useful because it speaks to an audience of business and political decision-makers. A more general reference to systems theories may help audiences of other types understand the complexity and interdependence of the various human and non-human actors that make up the field of AI. What I have not shown is that the metaphor of ecosystems is useful in understanding AI ethics and deriving recommendations for handling AI ethics dilemmas, the ultimate aim of this book. One could object that (eco)systems as phenomena of the natural world can be interpreted as being categorically separate from ethics.
It is a long-established position in philosophical ethics that normative statements (statements that can be expressed using the term “ought”) cannot be reduced to or fully deduced from descriptive statements. There is some plausibility to this when applied to ecosystems. Natural ecosystems are the result of evolution and predate humans and human ethical analysis. What individual members of ecosystems do and how ecosystems develop is thus not subject to ethics. If we think of an ecosystem without humans, say a palaeontological ecosystem in the Jurassic period, it is easy to see that an ethical analysis would be difficult and probably meaningless.
However, the relationship between “is” statements and “ought” statements is more complex than this, and numerous voices suggest that descriptive and normative statements at least need to inform one another (Magnani 2007). For instance, “ought implies can” is famously ascribed to Immanuel Kant (Kohl 2015) and links descriptive properties to normative properties. What a human being can do is descriptive, and if – as any reasonable person would agree – this needs to inform what ought to be done, we have a link. There is also a long tradition of philosophy that derives normative statements from descriptive ones or that derives moral obligations from some sort of being (Floridi 2010).
A different type of argument that seems to preclude the deduction of ethical insights from systems descriptions of reality can be observed in particular streams of systems theory. The epitome of this type of systems thinking is the work of the sociologist Niklas Luhmann (1987), who developed a systems theory based on autopoietic systems, i.e. systems whose primary purpose is to reproduce themselves. One can interpret biological systems in this way, but Luhmann’s focus is on social systems, which follow a particular internal logic and integrate environmental input to maintain the integrity of the system (Introna 1997). The economic system, for example, works on a logic of payments to generate further payments. Raising questions of ethics in such a system is either a category mistake or leads to the translation of ethics into payment-related terms, which is itself likely to be inappropriate.
The relationship between ethics and ecosystems is thus not straightforward. When deducing ethical statements from a systems description, one should take care to be explicit about the assumptions that support the deduction. The challenge of moving from “is” to “ought” should be addressed. That is a general challenge of this book. The astute reader will notice that I switched between different concepts of ethics in earlier chapters. I introduced the concept of human flourishing as part of a normative discussion of ethics and concluded that promoting human flourishing ought to be the purpose of AI. In my later discussion of the ethical issues of AI, I took a more descriptive stance, simply accepting as an ethical issue those social facts that people perceive to be ethical issues. This does raise fundamental questions about how we can move from a perception of something as ethically problematic to the normative statement that something should be done, which normally implies an obligation on someone to do something.
Using the ecosystem metaphor to describe ethical issues in AI remains in this tension between “is” and “ought”. A description of the ecosystem does not in and of itself provide the basis for normative suggestions on how to deal with it. While I recognise these conceptual issues, I do not view them as insurmountable. Ethical pronouncements do not directly follow from the ecosystem perspective, but ethical pronouncements without a good understanding of the real-life issues, which the ecosystem metaphor provides, would not be useful either.
In public discourse we can observe many examples of normative positions that refer to, for example, natural ecosystems. Saving the environment in general but also safeguarding particular natural ecosystems is widely recognised as not just a possible choice, but a moral obligation. This can be based on a number of normative premises (e.g. one ought to preserve anything living or anything capable of suffering; it is our duty to safeguard an environment fit for human habitation and future generations; one should preserve God’s creation; etc.) which are often not explicitly spelled out, but seem to have strong support.
In addition, I tried to provide the normative foundation of the book earlier by drawing on the ancient tradition of human flourishing, which is closely linked to the question of the good life. Our lives take place in natural, social and technical ecosystems, which have a strong bearing on our ability to live well. Drawing on Paul Ricoeur (1999: 256), I suggest that the ethical aim that can motivate the use of technology is the aim of the good life, lived with and for others in just institutions. It is these components that allow for human flourishing, and they motivate and provide the normative underpinnings of the AI-related recommendations that I develop below.
Adner R (2006) Match your innovation strategy to your innovation ecosystem. Harv Bus Rev 84:98–107, 148. https://hbr.org/2006/04/match-your-innovation-strategy-to-your-innovation-ecosystem. Accessed 12 Oct 2020
Ansari S, Garud R, Kumaraswamy A (2016) The disruptor’s dilemma: TiVo and the U.S. television ecosystem. Strateg Manag J 37:1829–1853. https://doi.org/10.1002/smj.2442
Bratman GN, Anderson CB, Berman MG et al (2019) Nature and mental health: an ecosystem service perspective. Sci Adv 5. https://doi.org/10.1126/sciadv.aax0903
Castelluccia C, Le Métayer D (2019) Understanding algorithmic decision-making: opportunities and challenges. Scientific Foresight Unit, European Parliamentary Research Service, Brussels. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf. Accessed 12 Oct 2020
Coscieme L (2015) Cultural ecosystem services: the inspirational value of ecosystems in popular music. Ecosyst Serv 16:121–124. https://doi.org/10.1016/j.ecoser.2015.10.024
Crook P (1996) Social Darwinism: the concept. Hist Eur Ideas 22(4):261–274. https://doi.org/10.1016/S0191-6599(96)00005-8
Digital Catapult (2020) Lessons in practical AI ethics: taking the UK’s AI ecosystem from “what” to “how”. Digital Catapult, London. https://connectedeverythingmedia.files.wordpress.com/2020/05/20200430_dc_143_ethicspaper.pdf. Accessed 12 Oct 2020
European Commission (2020a) White paper on artificial intelligence: a European approach to excellence and trust. European Commission, Brussels. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf. Accessed 22 Sept 2020
European Commission (2020b) Shaping Europe’s digital future. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. European Commission, Brussels. https://ec.europa.eu/info/sites/info/files/communication-shaping-europes-digital-future-feb2020_en_3.pdf. Accessed 6 Oct 2020
Floridi L (2010) Information ethics. In: Floridi L (ed) The Cambridge handbook of information and computer ethics. Cambridge University Press, UK, pp 77–97
Gobble MM (2014) Charting the innovation ecosystem. Res Technol Manage 57:55–59. https://doi.org/10.5437/08956308X5704005
Gomes LA de V, Facin ALF, Salerno MS, Ikenami RK (2018) Unpacking the innovation ecosystem construct: evolution, gaps and trends. Technol Forecast Soc Change 136:30–48. https://doi.org/10.1016/j.techfore.2016.11.009
Grint K, Woolgar S (1997) The machine at work: technology, work and organization. Polity Press, Cambridge
Hawkes D (2003) Ideology, 2nd edn. Routledge, London
Introna LD (1997) Management, information and power: a narrative of the involved manager. Palgrave Macmillan, London
Kohl M (2015) Kant and “ought implies can”. Philos Q 65:690–710. https://doi.org/10.1093/pq/pqv044
Luhmann N (1987) Soziale Systeme: Grundriß einer allgemeinen Theorie, 1st edn. Suhrkamp Verlag, Frankfurt am Main
Magnani L (2007) Morality in a technological world: knowledge as duty. Cambridge University Press, Cambridge, UK
Moore JF (1993) Predators and prey: a new ecology of competition. Harv Bus Rev 71:75–86. https://hbr.org/1993/05/predators-and-prey-a-new-ecology-of-competition. Accessed 12 Oct 2020
Nylund PA, Ferras-Hernandez X, Brem A (2019) Strategies for activating innovation ecosystems: introduction of a taxonomy. IEEE Eng Manage Rev 47:60–66. https://doi.org/10.1109/EMR.2019.2931696
OECD (2019) Recommendation of the council on artificial intelligence. OECD/LEGAL/0449. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Accessed 12 Oct 2020
Oh D-S, Phillips F, Park S, Lee E (2016) Innovation ecosystems: a critical examination. Technovation 54:1–6. https://doi.org/10.1016/j.technovation.2016.02.004
Oswald M, Grace J, Urwin S, Barnes G (2018) Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘experimental’ proportionality. Inf Commun Technol Law 27:223–250. https://doi.org/10.1080/13600834.2018.1458455
Paterson B (2007) We cannot eat data: the need for computer ethics to address the cultural and ecological impacts of computing. In: Hongladarom S, Ess C (eds) Information technology ethics: cultural perspectives. Idea Group Reference, Hershey PA, pp 153–168
Pombo-Juárez L, Könnölä T, Miles I et al (2017) Wiring up multiple layers of innovation ecosystems: contemplations from Personal Health Systems Foresight. Technol Forecast Soc Change 115:278–288. https://doi.org/10.1016/j.techfore.2016.04.018
Porra J (1999) Colonial systems. Inf Syst Res 10:38–69. https://doi.org/10.1287/isre.10.1.38
Priest G (2017) Charles Darwin’s theory of moral sentiments: what Darwin’s ethics really owes to Adam Smith. J Hist Ideas 78(4):571–593. https://doi.org/10.1353/jhi.2017.0032
Rauch J (1993) Kindly inquisitors: the new attacks on free thought. University of Chicago Press, Chicago IL
Ricoeur P (1999) Lectures, vol 1: autour du politique. Points, Paris
Ritala P, Almpanopoulou A (2017) In defense of ‘eco’ in innovation ecosystem. Technovation 60:39–42. https://doi.org/10.1016/j.technovation.2017.01.004
Rutledge K, Ramroop T, Boudreau D (2011) Ecosystem. National Geographic. https://www.nationalgeographic.org/encyclopedia/ecosystem/. Accessed 12 Oct 2020
Senyo PK, Liu K, Effah J (2019) Digital business ecosystem: literature review and a framework for future research. Int J Inf Manage 47:52–64. https://doi.org/10.1016/j.ijinfomgt.2019.01.002
Shaw W (1989) Ruling ideas. Can J Philos 15(Suppl 1):425–448. https://doi.org/10.1080/00455091.1989.10716806
Stix C (n.d.) Writing. https://www.charlottestix.com/european-union-ai-ecosystem. Accessed 22 June 2020
UNESCO (2020) First draft text of the recommendation on the ethics of artificial intelligence. Ad hoc expert group (AHEG) for the preparation of a draft text, UNESCO, Paris. https://unesdoc.unesco.org/ark:/48223/pf0000373434. Accessed 12 Oct 2020
Weikart R (2013) The role of Darwinism in Nazi racial thought. Ger Stud Rev 36:537–556. https://doi.org/10.1353/gsr.2013.0106
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2021 The Author(s)
Stahl, B.C. (2021). AI Ecosystems for Human Flourishing: The Background. In: Artificial Intelligence for a Better Future. SpringerBriefs in Research and Innovation Governance. Springer, Cham. https://doi.org/10.1007/978-3-030-69978-9_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-69977-2
Online ISBN: 978-3-030-69978-9