Artificial intelligence can be a game changer in addressing the global challenge of humanity-threatening climate change by fostering sustainable development. Since chemical research and development lays the foundation for innovative products and solutions, this study presents a novel chemical research and development process backed by artificial intelligence and guiding ethical principles to account for both process- and outcome-related sustainability. Particularly in ethically salient contexts, ethical principles have to accompany research and development powered by artificial intelligence to promote social and environmental good and sustainability (beneficence) while preventing any harm (non-maleficence) to all stakeholders affected (i.e., companies, individuals, society at large).
Artificial intelligence (AI) is a transformative power (re-)shaping businesses by providing new solutions to complex problems, increasing consistency and reliability, while decreasing costs and risks (Taddeo & Floridi, 2018). The “general purpose technology” character of AI can lay the foundation for innovations and capabilities (Brynjolfsson & Mitchell, 2017). Thus, AI has the potential to establish and bolster sustainable business models (e.g., Di Vaio et al., 2020; Mabkhot et al., 2021) and address major societal issues, among them, sustainable development as a focal challenge and objective of our time (Nishant et al., 2020; Vinuesa et al., 2020). AI approaches such as machine learning (ML) and deep learning facilitate the processing and analysis of massive amounts of structured and unstructured data (Jordan & Mitchell, 2015), which particularly benefits data-intensive research and applications. As chemical research has generated its insights from data from the very beginning (Gasteiger, 2020), AI is increasingly applied in various fields of chemical research including (synthetic) organic chemistry (e.g., de Almeida et al., 2019; Wei et al., 2016), toxicity prediction (e.g., Idakwo et al., 2019; Vo et al., 2020), quantum chemistry (e.g., Dral, 2020), (nano-)material science (e.g., Muratov et al., 2020), molecular design (e.g., Button et al., 2019), and drug discovery and design (e.g., Jiménez-Luna et al., 2020; Schneider, 2018, 2019; Zhang et al., 2017). AI in chemical research and development (R&D) can foster environmental and social good and embrace sustainability on two counts, that is, by developing more sustainable and ecofriendly substances and products on the one hand and by incorporating resource-efficient and sustainability-oriented methods in its R&D processes on the other hand (e.g., Ruiz-Mercado et al., 2012; van Wynsberghe, 2021).
Given the substantial impact of AI on the individual, economic, and societal level, AI development and use are accompanied by intensive discussions of guiding ethical principles by public and private institutions (Cowls et al., 2021; Floridi et al., 2018, 2020; Hagendorff, 2020; Jobin et al., 2019; Mittelstadt, 2019; Mittelstadt et al., 2016; Morley et al., 2020). However, the landscape of principles remains fragmented (Jobin et al., 2019) and translation into practice is needed (Mittelstadt, 2019; Morley et al., 2020). That is, recurring and prominent principles such as transparency, beneficence, and non-maleficence (Jobin et al., 2019) are normative, deontological, and high-order in nature (Hagendorff, 2020; Mittelstadt, 2019). However, translation into business and research practice might require trade-offs, context-dependent application, and consideration of different stakeholder interests. Stakeholders range from companies and organizations that (differently) interpret and apply ethical principles when developing and utilizing AI (Ryan et al., 2021), to individuals that are directly or indirectly affected by AI, and eventually to society at large (which is also impacted by environmental well-being). Accounting for the different stakeholders becomes particularly important when AI is operating between the priorities of promoting social good (i.e., beneficence) and preventing any harm (i.e., non-maleficence). Resolving this tension to achieve a “dual advantage” for society (Floridi et al., 2018, p. 694) is at the core of the AI-for-social-good perspective (e.g., Cowls et al., 2021; Floridi et al., 2018, 2020; Taddeo & Floridi, 2018).
This conceptual study shows how accompanying ethical principles for the deployment of AI in chemical R&D can foster social and environmental good. Therefore, we present a chemical R&D process spanning cutting-edge chemical research that can substantially contribute to innovative products, methodological advances of AI, and guiding ethical principles. We base our conceptual analysis on synthetic chemicals (i.e., pesticides) as illustrative R&D objects, since they are beneficent and maleficent at the same time and can thus be considered ethically controversial products. Thereby, our study contributes to the AI ethics literature by showing how ethical principles related to AI can be translated into business and research practice to promote environmental and social good while accounting for multiple stakeholder interests.
The remainder of our study is structured as follows. After briefly shedding light on how AI has evolved and is now applied in chemical R&D, we illustrate how chemical R&D powered by AI can be utilized for good through guiding ethical principles and consideration of all stakeholders affected. We conclude with a future outlook and a call for more collaborative, open science approaches to meet the global challenge of sustainable development.
AI in Chemical R&D
Since (laboratory) chemical research has always accumulated enormous amounts of (experimental) data on chemical and physical properties, chemical reactions and structures, and biological activities, methods from computer science have been developed and utilized in chemistry starting in the 1960s (Gasteiger, 2020). They were subsumed under the term artificial intelligence already then (Gasteiger, 2020). Quantitative structure–property/activity relationship (QSPR/QSAR) modeling is a long- and well-established computational approach for analyzing chemical data (Gasteiger, 2020; Muratov et al., 2020). QSAR models have historically been applied to computer-aided drug discovery and are used to predict or design novel chemicals with desired properties by establishing linear or non-linear relationships between values of chemical descriptors computed from molecular structure and experimentally measured properties or bioactivities of those molecules (Muratov et al., 2020). Chemical discovery pertains not only to finding a specific molecule, but also to identifying reaction pathways and interactions between molecules, optimizing catalytic conditions, eliminating adverse side effects, and various other factors. All of them require a statistical view on chemical substance design and discovery and thus give rise to ML techniques (Tkatchenko, 2020). For instance, hybrid methods uniting ML and rule-/expert-knowledge-based approaches and more advanced deep learning models have been developed for the molecular design of synthetic chemical entities with drug-like properties and for drug discovery, respectively (e.g., Button et al., 2019; Jiménez-Luna et al., 2020). With the availability of big data, drug discovery approaches increasingly move from ML to deep learning methods due to their computational power and capacity to handle massive amounts of data (Schneider, 2018; Zhang et al., 2017).
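To illustrate the core idea, a QSAR model in its simplest (linear) form maps computed molecular descriptors to a measured property. The sketch below uses an ordinary least-squares fit; all descriptor values and property measurements are invented for illustration and do not correspond to real molecules.

```python
import numpy as np

# Hypothetical training set: each row holds molecular descriptors
# (e.g., molecular weight, logP, polar surface area) computed from structure.
X = np.array([
    [180.2, 1.2, 63.6],
    [206.3, 3.5, 37.3],
    [151.2, 0.9, 49.3],
    [230.3, 3.1, 34.1],
])
# Experimentally measured property (e.g., log-solubility) per molecule.
y = np.array([-1.3, -3.9, -1.0, -3.5])

# Fit a linear QSPR/QSAR model y ≈ X_aug @ w (least squares with intercept).
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Predict the property of a novel candidate molecule from its descriptors.
candidate = np.array([195.0, 2.0, 50.0, 1.0])
prediction = float(candidate @ w)
print(round(prediction, 2))
```

Non-linear relationships of the kind established by modern ML and deep learning models replace the linear map with more expressive function classes, but the descriptor-to-property logic remains the same.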
Besides, AI approaches are gaining importance in predicting the toxicity of drugs and chemicals (i.e., in silico toxicity prediction), because in vitro/vivo methods are often constrained by ethical considerations, time, budget, and other resources (e.g., Idakwo et al., 2019; Vo et al., 2020). Relatedly, life-cycle impacts of chemicals have also been shown to be assessable by means of AI (e.g., Song et al., 2017). Both the toxicity of chemicals and substances and their life-cycle impact can be crucial factors that affect individual and environmental well-being. In the following, we illustrate how AI in chemical R&D can be harnessed to address these and other factors by accounting for ethical principles and the various stakeholders affected.
Leveraging AI in Chemical R&D for Environmental and Social Good
We account for the calls for nexus approaches and interdisciplinary research on sustainable development and climate change (e.g., Fuso Nerini et al., 2019; Schneider et al., 2019; Seele, 2016) by presenting an AI-driven chemical R&D process (see Fig. 1) and guiding ethical and methodological principles that relate to the R&D process and its outcomes (Burget et al., 2017). We argue that orientation toward and observance of these guiding principles can contribute to both process- and outcome-related sustainability.
The proposed R&D process comprises the definition of the required properties, the AI-based molecular design and in-silico characterization of the relevant properties, the ranking of the most promising candidates, and respective AI-based reaction designs as a basis for laboratory experiments. Synthetic chemicals (i.e., pesticides) are used as illustrative R&D objects for two reasons. First, they are central to cost-effective production of food and efficiency gains in agricultural systems (e.g., Pretty, 2018). Second, they simultaneously pose substantial environmental threats (Bernhardt et al., 2017). In other words, R&D of pesticides can be considered an ethically salient R&D context that requires ethically responsible conduct and anticipation of potential negative side-effects along the entire R&D process. Moreover, the deployment of AI applications in the sustainability context should account for all stakeholders potentially affected, particularly, given potential tensions of collective versus individual benefits and costs (Vinuesa et al., 2020). This multiperspectivity further accounts for the AI-for-social-good perspective (e.g., Cowls et al., 2021; Floridi et al., 2018, 2020; Taddeo & Floridi, 2018). Correspondingly, we focus on the AI ethics typology suggested by this stream of research (Floridi et al., 2018; Morley et al., 2020), that is, beneficence, non-maleficence, autonomy, justice, and explicability.
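The staged process described above can be sketched as a pipeline of function stubs. All function names, signatures, and return values are illustrative placeholders, not an actual system; each stage would in practice be backed by the AI models discussed in this paper.

```python
# Illustrative sketch of the proposed R&D process as five staged steps.
# Every name and value here is a hypothetical stand-in.

def define_required_properties():
    """Stage 1: target property profile (e.g., minimum efficacy, ecotoxicity cap)."""
    return {"efficacy_min": 0.7, "ecotox_max": 0.3}

def design_molecules(profile):
    """Stage 2: AI-based molecular design (stubbed candidate identifiers)."""
    return ["mol-001", "mol-002", "mol-003"]

def characterize(candidates):
    """Stage 3: in-silico characterization of relevant properties (stubbed scores)."""
    return {m: {"efficacy": 0.8, "ecotox": 0.2} for m in candidates}

def rank(characterized, profile):
    """Stage 4: keep candidates meeting the profile, most promising first."""
    ok = [m for m, p in characterized.items()
          if p["efficacy"] >= profile["efficacy_min"]
          and p["ecotox"] <= profile["ecotox_max"]]
    return sorted(ok, key=lambda m: characterized[m]["efficacy"], reverse=True)

def design_reactions(shortlist):
    """Stage 5: AI-based reaction design as a basis for laboratory experiments."""
    return {m: f"synthesis-plan-for-{m}" for m in shortlist}

profile = define_required_properties()
plans = design_reactions(rank(characterize(design_molecules(profile)), profile))
print(sorted(plans))
```

The point of the sketch is the ordering: required properties are fixed before design, and ranking filters the in-silico characterization before any laboratory resources are committed.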
Beneficence and Non-Maleficence
Scientists and experts state and warn that the imminent climate crisis is accelerating faster than expected and threatening natural ecosystems and humanity more severely than anticipated (IPCC, 2018, 2019; Ripple et al., 2020). Climate change presumably constitutes the most threatening global challenge for humanity (Coeckelbergh, 2021). That necessitates substantial increases of scale in endeavors to avoid untold suffering (IPCC, 2018; Ripple et al., 2020) and “bold solutions…that integrate environmental and societal objectives” (Arneth et al., 2020, p. 30882). Sustainability and sustainable development are pivotal to addressing these fundamental challenges. Both sustainability and sustainable development are widely used but polysemous concepts (Ben-Eli, 2018; Brown et al., 1987; Hopwood et al., 2005). Since mapping the different definitions and context-dependent meanings is beyond the scope of this paper, we simplistically refer to sustainability as a dynamic balance between human activity and environmental capacity, particularly limiting adverse environmental impacts and the utilization of (natural) resources.
While some scholars argue that environmental sustainability is only worth pursuing for ethical reasons (Zagonari, 2020), others consider sustainability an epistemic-moral hybrid (Schneider et al., 2019). Sustainability also constitutes an ethical principle and objective related to the development and deployment of AI (Jobin et al., 2019). According to the AI-for-social-good perspective, sustainability is at the core of the beneficence principle, which holds that AI should promote individual, social, and environmental well-being (Floridi et al., 2018). The beneficence principle is closely related—although not equivalent—to the tenet of non-maleficence. Non-maleficence incorporates the importance of safety, security, and privacy as well as the prevention of risks and any harm caused either accidentally/unintentionally (overuse) or deliberately (misuse). Thus, it cautions against all potentially negative aspects and consequences of AI development and use (Floridi et al., 2018; Jobin et al., 2019).
AI in chemical R&D contributes to sustainability in two ways. First, AI simulations supplemented by quantum chemical (QC) predictions dematerialize and digitalize conventional lab experiments. That is, research questions (e.g., focusing on toxicity and acidity) are addressed and answered by AI simulations (i.e., in-silico characterization) instead of resource-intensive laboratory experiments. That can eventually result in substantial resource-efficiency gains and lower costs (e.g., due to minimization of material usage), that is, R&D process-related sustainability. The R&D process for synthetic chemicals such as pesticides becomes more resource-efficient and sustainable, because intended reductions in doses and environmental half-lives of active substances are usually accompanied by more complex molecular structures of synthetic chemicals, which in turn increase resource utilization during the R&D process (Geisler et al., 2005). Of course, AI applications and systems have an ecological footprint themselves and can have rebound effects caused by the energy consumption and emissions of AI development, production, and deployment (e.g., Dhar, 2020). That is, certain short-term trade-offs can occur, which is not unusual for sustainability efforts in general (e.g., De Neve & Sachs, 2020). Nevertheless, if AI-enabled simulations are used at scale for a variety of research questions, we assume that the marginal (environmental) costs and impact of use will substantially decline and not outweigh the resource-efficiency gains in the long run.
Second, AI-based simulations relax the constraints of conventional laboratory research. Researchers can significantly extend the scope of research questions due to the computational power of AI and a vast array of scientific research and secondary, although often unstructured, data. That is, simulations make it possible to simultaneously investigate a greater number and diversity of relevant research questions and to substitute widespread but inefficient one-parameter-at-a-time methods (e.g., Schneider, 2018). That is particularly relevant in the sustainability context when it comes to both the effectiveness (i.e., beneficence) and potential negative side effects (i.e., non-maleficence) of substances and prospective product candidates. Generally, the sheer volume, diversity, and intensity of use of chemicals can impede risk assessments and pose substantial environmental challenges (Johnson et al., 2020). Moreover, identifying and tracking chemicals and related (bioactive) transformation products in the environment and at ever-lower concentrations in human bodies is hampered by the complex mixture of thousands of chemicals the environment and humans are exposed to from multiple sources through multiple pathways (Escher et al., 2020). Therefore, anticipating and quantifying chemicals’ (detrimental) environmental impact requires comprehensive research activities, with synthetic chemicals like pesticides being no exceptions.
In light of calls for sustainable and ecological intensification of agricultural systems, that is, increased agricultural yields without the conversion of additional non-agricultural land and adverse environmental impact (e.g., Cassman & Grassini, 2020; Geertsema et al., 2016; Godfray & Garnett, 2014; Loos et al., 2014; Pretty, 2018; Pretty & Bharucha, 2014), synthetic chemicals like pesticides are a double-edged sword. Their beneficial role for pest management, crop yield, and food security (Cooper & Dobson, 2017) is compromised by pesticide resistance (e.g., Gould et al., 2018), reduction of biodiversity (e.g., Beketov et al., 2013; Dudley et al., 2017), and other negative externalities for human health and natural systems (Bernhardt et al., 2017; Pretty & Bharucha, 2015; Tilman et al., 2002). Conventional experimental laboratory research does not sufficiently predict the individual and collective impact of synthetic chemicals on ecosystems, since their toxicity depends on reactions or interactions with other chemicals in natural environments, transformations by organisms, or exposure to natural light (Bernhardt et al., 2017).
AI with its self-learning capabilities and in combination with large-scale, increasingly rich, and high-dimensional research data (Vermeulen et al., 2020; Vinuesa et al., 2020) has the potential to account for these complex environmental interactions, interdependencies, and externalities. By feeding ML algorithms with relevant multifaceted scientific and secondary data (see Fig. 2), AI simulations in association with QC predictions will, in future, be able to define substance properties that are best-suited for areas of application and surrounding circumstances and environmental factors (i.e., in-silico characterization and ranking) to maximize beneficence while limiting maleficence. Thereby, AI in chemical R&D also accounts for the green chemistry principles that include less hazardous chemical syntheses, designing safer chemicals, and inherently safer chemistry for accident prevention, among other things (e.g., Anastas & Warner, 1998; Anastas & Zimmerman, 2003; Erythropel et al., 2018; Zimmerman et al., 2020). Potential factors taken into consideration in ML models relate to life-cycle and environmental impact assessment categories and can comprise human toxicity, aquatic and terrestrial ecotoxicity, and acidification (Geisler et al., 2005).
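A minimal sketch of such a beneficence/non-maleficence ranking is a weighted trade-off between predicted efficacy and impact-assessment penalties. All candidate scores and criterion weights below are invented; in practice they would come from the in-silico characterization and context-dependent impact assessments.

```python
# Hypothetical in-silico scores per candidate: predicted efficacy (benefit)
# and impact-assessment penalties (human toxicity, ecotoxicity, acidification).
candidates = {
    "A": {"efficacy": 0.82, "human_tox": 0.30, "ecotox": 0.25, "acidif": 0.10},
    "B": {"efficacy": 0.90, "human_tox": 0.60, "ecotox": 0.55, "acidif": 0.20},
    "C": {"efficacy": 0.75, "human_tox": 0.15, "ecotox": 0.20, "acidif": 0.05},
}

# Context-dependent weights for the maleficence-related criteria.
weights = {"human_tox": 0.5, "ecotox": 0.35, "acidif": 0.15}

def score(props):
    """Benefit minus weighted harm: a simple beneficence/non-maleficence trade-off."""
    penalty = sum(weights[k] * props[k] for k in weights)
    return props["efficacy"] - penalty

ranking = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
print(ranking)
```

Note that the most effective candidate need not rank first once maleficence-related criteria are weighted in; that is precisely the point of characterizing candidates across impact categories before ranking.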
However, a prerequisite for accurate and valid ML model predictions is curated, consistent data sets (e.g., de Almeida et al., 2019; Schneider, 2018), since biases, inaccuracies, errors, and mistakes inherent in data could lead to biased results and false conclusions (Barredo Arrieta et al., 2020; Morley et al., 2020). Research findings of simulations, ML predictions, and subsequent laboratory research have to continuously complement research databases, which, in turn, inform follow-up or related R&D. A data life cycle that enhances self-learning capabilities and fast feedback loops emerges. Furthermore, final and preliminary results have to be documented for subsequent life-cycle assessments and registration and approval processes, and to potentially provide them to other researchers or make them entirely publicly available by pursuing open science approaches (Rüegg et al., 2014). To optimize the re-use of research findings and scientific data, corresponding scientific data management structures should follow the FAIR principles for scientific data management, that is, Findable, Accessible, Interoperable, Reusable (Wilkinson et al., 2016). In this way, such a data life cycle (see Fig. 2) has intra-organizational epistemic and methodological advantages such as increased accuracy and validity of model predictions through broader databases, feedback loops, and more comprehensive model training, but also provides benefits for external stakeholders. The accuracy and validity of ML predictions as well as ensuring beneficence and non-maleficence across stakeholders closely relate to the justice principle.
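In practice, FAIR-oriented data management starts with machine-actionable metadata per research artifact. The record below is an illustrative sketch only: the field names loosely follow common metadata conventions, and the identifier, URL, and provenance entries are placeholders, not real resources.

```python
import json

# Illustrative metadata record for one in-silico characterization run.
# All identifiers, URLs, and model names are hypothetical placeholders.
record = {
    "identifier": "doi:10.0000/example-qsar-run-42",       # Findable: persistent ID
    "title": "In-silico toxicity screen, candidate set 7",
    "access_url": "https://data.example.org/runs/42",      # Accessible: retrieval point
    "format": "text/csv",                                  # Interoperable: open format
    "vocabulary": "InChI",                                 # Interoperable: shared chemical IDs
    "license": "CC-BY-4.0",                                # Reusable: clear usage terms
    "provenance": {                                        # Reusable: how the data arose
        "model": "gradient-boosted QSAR v3",
        "training_data": "curated toxicity set, release 2021-06",
    },
}

# Serializing to JSON keeps the record machine-actionable for data catalogs.
print(json.dumps(record, indent=2))
```

Recording provenance alongside access and licensing information is what allows simulation outputs to flow back into curated databases and feed the fast feedback loops of the data life cycle.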
By integrating broad and diverse data resources and a multi-stakeholder perspective, AI-based R&D processes incorporate the justice principle, which should also guide sustainable intensification (Loos et al., 2014) and environmental sustainability (Zagonari, 2020). The justice principle espouses fairness and the prevention of unwanted/unfair biases and discrimination, also amending past inequities (Jobin et al., 2019; Morley et al., 2020). Justice further entails sharing benefits and prosperity and fostering solidarity (Floridi et al., 2018), the latter being stipulated to be considered as a focal ethical principle on its own (Luengo-Oroz, 2019). In the AI-driven R&D context, research outcomes related to products but also scientific data have to be equally beneficial and non-discriminatory with respect to all stakeholders affected, within and across countries and regions. That is particularly important given the disproportionally adverse effects of climate change for poorer countries (IPCC, 2018) and the differences in pesticide use, efficiency of agricultural systems, and food security across countries (Cassman & Grassini, 2020; Pretty, 2018; Pretty & Bharucha, 2015). Developing countries are at higher risk than developed countries, since there is no equity in the global distribution of chemical pollutants and their negative environmental externalities (Escher et al., 2020).
Bolstering and broadening R&D capacities through ML and QC simulations can be an initial step and future cornerstone of the mass customization of products deployed in agricultural systems. Thereby, agricultural systems’ idiosyncrasies can be taken into account in a fair and a cost- and resource-efficient way. Evidence and views of the benefits and costs of ethically controversial substances and products such as pesticides are far from unequivocal (Pretty & Bharucha, 2015). Hence, context-dependent evaluations considering all stakeholders and external (environmental) conditions are imperative and should inform judgments about interferences with ethical principles. In some circumstances, such assessments will require trade-offs between benefits (e.g., increase of crop yield enabling food security due to pesticide efficiency) and costs (e.g., adverse effects on biodiversity in certain environments). However, as human judgments can be error-prone, biased, and discriminatory, so can AI predictions and inferences (Rich & Gureckis, 2019). As mentioned above, focal sources of biased predictions are biases in and skewness of underlying data (Barredo Arrieta et al., 2020; Morley et al., 2020; Vinuesa et al., 2020). Sources of biases include but are not limited to misleading proxy features (Barredo Arrieta et al., 2020) or sparse (small) data (de Almeida et al., 2019; Rich & Gureckis, 2019). Biases can further result from researchers themselves through personal preferences and biases and (chemical) education, which can also unwillingly narrow search spaces (de Almeida et al., 2019; Schneider, 2018).
In light of human- and AI-induced biases, the autonomy principle and the balancing of human and AI agency become and will remain decisive in ethically salient R&D contexts. In the AI ethics context, autonomy features a meta-autonomy or a decide-to-delegate model, that is, “humans should always retain the power to decide which decisions to take” on their own or when to cede decision-making control (Floridi et al., 2018, p. 698). Autonomy in relation to AI applications and systems requires human agency (i.e., autonomous human decisions) and human oversight (Morley et al., 2020). As human researchers encounter difficulties in unambiguously grasping and determining the utilities of R&D outcomes for different stakeholders, so does AI to an even higher degree (Butkus, 2020). Ethically controversial questions about environmental compatibility or toxicity of substances related to both humans and the environment necessitate human oversight and foresight (e.g., Floridi & Strait, 2020). However, human agency and decision-making do not only pertain to judging final AI predictions, but to the entire R&D process ranging from research question formulation, definition and assessment of required properties (dependent on areas of application), and evaluations of AI-based solutions. Correspondingly, de Almeida et al. (2019) noted that:
the right research questions must be asked prior to deploying the AI and its domain of applicability, advantages and limitations need to be well understood in order to assess the utility and appropriateness of a given algorithm for a particular task (p. 601).
Since artificial moral agency is still in its infancy (Cervantes et al., 2020), we propose a combination of human, AI, and shared agency along the R&D process (see Fig. 1). AI-driven chemical R&D can then incorporate multi-objective maximum-expected-utility concepts that are aligned to human values and ethical principles (e.g., Vamplew et al., 2018). Eventually, humans are in charge of equipping AI systems and their utility functions with ethical judgment capacities by deciding on the respective AI design approach. Correspondingly, developers have to decide whether AI systems base their ethical decision-making on pre-defined ethical theories (top-down), on more flexible self-learning mechanisms based on certain values (bottom-up), or on both (hybrid) (Bonnemains et al., 2018; Cervantes et al., 2020).
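A toy sketch of such a hybrid design combines a hard, pre-defined rule (top-down) that vetoes candidates violating a non-maleficence constraint with weighted objectives (bottom-up, e.g., learned or context-specific) for the remaining trade-offs. All thresholds, scores, and weights below are invented for illustration.

```python
# Hybrid multi-objective utility: a top-down rule vetoes candidates above a
# toxicity ceiling; bottom-up weights trade off the remaining objectives.
# All numbers are hypothetical.

TOXICITY_CEILING = 0.5  # top-down rule: never accept candidates above this

def expected_utility(objectives, weights):
    """Weighted sum over objective scores, or -inf if the hard rule is violated."""
    if objectives["toxicity"] > TOXICITY_CEILING:
        return float("-inf")
    return sum(weights[k] * v for k, v in objectives.items() if k != "toxicity")

candidates = {
    "X": {"efficacy": 0.9, "persistence": -0.4, "toxicity": 0.7},  # vetoed by the rule
    "Y": {"efficacy": 0.7, "persistence": -0.2, "toxicity": 0.3},
}
weights = {"efficacy": 1.0, "persistence": 0.5}

best = max(candidates, key=lambda c: expected_utility(candidates[c], weights))
print(best)
```

The hard veto encodes a deontological, pre-defined constraint that no weighted benefit can override, while the weight vector remains open to adjustment through self-learning or stakeholder input, which is the essence of the hybrid approach.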
To take on this challenge, collaboration and exchange with other stakeholders (e.g., Flipse et al., 2013) or ethicists, that is, an embedded ethics approach (e.g., Bonnemains et al., 2018; Brey, 2000; McLennan et al., 2020; Moor, 2005), have to be contemplated to translate ethical principles into AI-powered business and research practice. In the future, entirely autonomous AI decision-making in chemical R&D in the form of self-driving laboratories and closed-loop approaches (e.g., Häse et al., 2019; Muratov et al., 2020) is promising, but complicated in the case of ethically controversial R&D outcomes given the long way to go to achieve artificial moral agency (e.g., Cervantes et al., 2020).
Although researchers define required properties and should be involved in the research process (human agency), an understanding of how the AI works and how predictions are derived is essential. That is at the core of intelligibility, the epistemological dimension of explicability (Floridi et al., 2018). The concepts of intelligibility, comprehensibility, interpretability, explainability, and transparency are often used interchangeably and inconsistently (Barredo Arrieta et al., 2020), and are partly misconceived (Rudin, 2019). In a comprehensive review, Barredo Arrieta et al. (2020) identified intelligibility, that is, human understanding of a model’s function without any need to explain its internal structure or underlying data-processing algorithm, as the most appropriate conceptualization.
In the R&D context, the intelligibility principle is multi-faceted. While researchers who are directly involved in the AI development process and oversee the R&D process should have an in-depth understanding of the underlying data and the AI models’ structures and functions, a basic understanding might suffice for other internal organizational stakeholders. Otherwise, overly complex and incomprehensible explanations and overly complicated decision pathways loom (Ananny & Crawford, 2018; Rudin, 2019). From an external perspective, the intelligibility principle is regularly limited or diluted by proprietary boundaries and intellectual property right restrictions in the case of commercial product development (e.g., Ananny & Crawford, 2018; Mittelstadt et al., 2016). However, providing certain information to external stakeholders and ensuring intelligibility for them can simplify and accelerate external life-cycle assessments and registration and approval processes (see Fig. 2), and foster collaborative actions to pursue sustainability objectives (e.g., open science approaches).
In general, intelligibility is central to AI-powered R&D, because it constitutes a proethical condition for enabling or impairing judgments of beneficence, non-maleficence, justice, and autonomy (Turilli & Floridi, 2009). Understanding the functionalities of AI (i.e., intelligibility) can inform evaluations of the other principles by comprehending if and how AI benefits (beneficence) or harms (non-maleficence) individuals and society in a fair and unbiased way (justice) and by drawing conclusions about whether to delegate decisions to AI systems (autonomy) (Floridi et al., 2018).
On the other hand, accountability focuses on who is responsible for the way AI works, that is, the ethical dimension of explicability (Floridi et al., 2018). It is closely related to intelligibility (e.g., Coeckelbergh, 2020; Lepri et al., 2018; Martin, 2019; Morley et al., 2020), since judgments about accountability necessitate a certain understanding of the underlying processes of AI systems and applications (i.e., intelligibility) (Lepri et al., 2018). Accountability can create shared responsibility within the organization and responsibility towards external stakeholders, which is particularly relevant in ethically salient contexts. It can be backward-looking, that is, who is ascribed responsibility when something goes wrong, and forward-looking, that is, how AI systems can be designed and used responsibly (Coeckelbergh, 2020). Both views matter for the AI-based R&D of ethically controversial products like pesticides and prompt that human researchers have to be kept in the loop, oversee R&D processes, and anticipate and foresee ethical issues (e.g., adverse effects of substances) for the time being.
Taken together, explicability of AI-based R&D is pivotal to meet the global challenge of sustainable development and develop joint actions, and it accounts for both the AI-for-social-good perspective (Taddeo & Floridi, 2018) and the collaborative, open-science and transparency stance in ecological and sustainability research (Bausch et al., 2014; Rüegg et al., 2014; Seele, 2016).
Chemical R&D can be fundamental to solutions that underpin and accelerate sustainable development. Since sustainability initiatives and research always have an ethical dimension (e.g., Schneider et al., 2019; Zagonari, 2020), chemical R&D that is powered by AI and pursues sustainable products and solutions has to be particularly open and explicit about guiding ethical principles and alignment with existing guidelines (Vinuesa et al., 2020). In light of the rapid advancement of AI, chemical R&D will contribute to the development of sustainable substances and products in the future (e.g., biopesticides; Pretty, 2018) by means of sustainable and resource-efficient R&D processes. Particularly, self-driving laboratories provide promising opportunities (e.g., Häse et al., 2019; Muratov et al., 2020), although human researchers have to remain in the loop for the time being, particularly, in ethically salient research contexts. Notwithstanding, researchers might “soon address challenges that previously were simply considered to be prohibitively complex or demanding, such as automatized experimentation or synthesis of new materials and molecules on demand” (von Lilienfeld & Burke, 2020, p. 3). AI can be a game changer to address sustainable development and climate change (Kaplan & Haenlein, 2020), and through chemical R&D, the fuel of AI can be added to the fire of sustainability efforts.
Respective scientific data and knowledge are irreplaceable in a volatile, uncertain, complex, and ambiguous environment, and a key conduit to knowledge discovery, integration, and innovation (Rüegg et al., 2014; Wilkinson et al., 2016). Hence, insights generated in the course of AI development and refinement to foster sustainability and related sustainability research findings can be considered a social good. Therefore, a more collaborative, open science approach should be preferred to restrictive proprietary and institutional boundaries on the one hand. On the other hand, scientific data should be managed and potentially made accessible to facilitate seamless re-use and collaboration opportunities to tackle the global challenge of sustainable development and climate change.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Anastas, P. T., & Warner, J. C. (1998). Green chemistry: Theory and practice. Oxford University Press.
Anastas, P. T., & Zimmerman, J. B. (2003). Design through the 12 principles of green engineering. Environmental Science & Technology, 37(5), 94A–101A. https://doi.org/10.1021/es032373g
Arneth, A., Shin, Y.-J., Leadley, P., Rondinini, C., Bukvareva, E., Kolb, M., Midgley, G. F., Oberdorff, T., Palomo, I., & Saito, O. (2020). Post-2020 biodiversity targets need to embrace climate change. Proceedings of the National Academy of Sciences, 117(49), 30882–30891. https://doi.org/10.1073/pnas.2009584117
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
Bausch, J. C., Bojórquez-Tapia, L., & Eakin, H. (2014). Agro-environmental sustainability assessment using multicriteria decision analysis and system analysis. Sustainability Science, 9(3), 303–319. https://doi.org/10.1007/s11625-014-0243-y
Beketov, M. A., Kefford, B. J., Schäfer, R. B., & Liess, M. (2013). Pesticides reduce regional biodiversity of stream invertebrates. Proceedings of the National Academy of Sciences, 110(27), 11039–11043. https://doi.org/10.1073/pnas.1305618110
Ben-Eli, M. U. (2018). Sustainability: Definition and five core principles, a systems perspective. Sustainability Science, 13(5), 1337–1343. https://doi.org/10.1007/s11625-018-0564-3
Bernhardt, E. S., Rosi, E. J., & Gessner, M. O. (2017). Synthetic chemicals as agents of global change. Frontiers in Ecology and the Environment, 15(2), 84–90. https://doi.org/10.1002/fee.1450
Bonnemains, V., Saurel, C., & Tessier, C. (2018). Embedded ethics: Some technical and ethical challenges. Ethics and Information Technology, 20(1), 41–58. https://doi.org/10.1007/s10676-018-9444-x
Brey, P. A. E. (2000). Method in computer ethics: Towards a multi-level interdisciplinary approach. Ethics and Information Technology, 2(2), 125–129. https://doi.org/10.1023/A:1010076000182
Brown, B. J., Hanson, M. E., Liverman, D. M., & Merideth, R. W. (1987). Global sustainability: Toward definition. Environmental Management, 11(6), 713–719. https://doi.org/10.1007/BF01867238
Brynjolfsson, E., & Mitchell, T. M. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530–1534. https://doi.org/10.1126/science.aap8062
Burget, M., Bardone, E., & Pedaste, M. (2017). Definitions and conceptual dimensions of responsible research and innovation: A literature review. Science and Engineering Ethics, 23(1), 1–19. https://doi.org/10.1007/s11948-016-9782-1
Butkus, M. A. (2020). The human side of artificial intelligence. Science and Engineering Ethics, 26(5), 2427–2437. https://doi.org/10.1007/s11948-020-00239-9
Button, A., Merk, D., Hiss, J. A., & Schneider, G. (2019). Automated de novo molecular design by hybrid machine intelligence and rule-driven chemical synthesis. Nature Machine Intelligence, 1(7), 307–315. https://doi.org/10.1038/s42256-019-0067-7
Cassman, K. G., & Grassini, P. (2020). A global perspective on sustainable intensification research. Nature Sustainability, 3(4), 262–268. https://doi.org/10.1038/s41893-020-0507-8
Cervantes, J. A., López, S., Rodríguez, L. F., Cervantes, S., Cervantes, F., & Ramos, F. (2020). Artificial moral agents: A survey of the current status. Science and Engineering Ethics, 26(2), 501–532. https://doi.org/10.1007/s11948-019-00151-x
Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051–2068. https://doi.org/10.1007/s11948-019-00146-8
Coeckelbergh, M. (2021). AI for climate: Freedom, justice, and other ethical and political challenges. AI and Ethics, 1(1), 67–72. https://doi.org/10.1007/s43681-020-00007-2
Cooper, J., & Dobson, H. (2007). The benefits of pesticides to mankind and the environment. Crop Protection, 26(9), 1337–1348. https://doi.org/10.1016/j.cropro.2007.03.022
Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2021). A definition, benchmark and database of AI for social good initiatives. Nature Machine Intelligence, 3(2), 111–115. https://doi.org/10.1038/s42256-021-00296-0
de Almeida, A. F., Moreira, R., & Rodrigues, T. (2019). Synthetic organic chemistry driven by artificial intelligence. Nature Reviews Chemistry, 3(10), 589–604. https://doi.org/10.1038/s41570-019-0124-0
De Neve, J.-E., & Sachs, J. D. (2020). The SDGs and human well-being: A global analysis of synergies, trade-offs, and regional differences. Scientific Reports, 10, 15113. https://doi.org/10.1038/s41598-020-71916-9
Di Vaio, A., Palladino, R., Hassan, R., & Escobar, O. (2020). Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research, 121, 283–314. https://doi.org/10.1016/j.jbusres.2020.08.019
Dhar, P. (2020). The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8), 423–425. https://doi.org/10.1038/s42256-020-0219-9
Dral, P. O. (2020). Quantum chemistry in the age of machine learning. Journal of Physical Chemistry Letters, 11(6), 2336–2347. https://doi.org/10.1021/acs.jpclett.9b03664
Dudley, N., Attwood, S. J., Goulson, D., Jarvis, D., Bharucha, Z. P., & Pretty, J. (2017). How should conservationists respond to pesticides as a driver of biodiversity loss in agroecosystems? Biological Conservation, 209, 449–453. https://doi.org/10.1016/j.biocon.2017.03.012
Erythropel, H. C., Zimmerman, J. B., de Winter, T. M., Petitjean, L., Melnikov, F., Lam, C. H., Lounsbury, A. W., Mellor, K. E., Janković, N. Z., Tu, Q., Pincus, L. N., Falinski, M. M., Shi, W., Coish, P., Plata, D. L., & Anastas, P. T. (2018). The Green ChemisTREE: 20 years after taking root with the 12 principles. Green Chemistry, 20(9), 1929–1961. https://doi.org/10.1039/C8GC00482J
Escher, B. I., Stapleton, H. M., & Schymanski, E. L. (2020). Tracking complex mixtures of chemicals in our changing environment. Science, 367(6476), 388–392. https://doi.org/10.1126/science.aay6636
Flipse, S. M., van der Sanden, M. C. A., & Osseweijer, P. (2013). The why and how of enabling the integration of social and ethical aspects in research and development. Science and Engineering Ethics, 19(3), 702–725. https://doi.org/10.1007/s11948-012-9423-2
Floridi, L., & Strait, A. (2020). Ethical foresight analysis: What it is and why it is needed. Minds and Machines, 30(1), 77–97. https://doi.org/10.1007/s11023-020-09521-y
Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26(3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Fuso Nerini, F., Sovacool, B., Hughes, N., Cozzi, L., Cosgrave, E., Howells, M., Tavoni, M., Tomei, J., Zerriffi, H., & Milligan, B. (2019). Connecting climate action with other Sustainable Development Goals. Nature Sustainability, 2(8), 674–680. https://doi.org/10.1038/s41893-019-0334-y
Gasteiger, J. (2020). Chemistry in times of artificial intelligence. ChemPhysChem, 21(20), 2233–2242. https://doi.org/10.1002/cphc.202000518
Geertsema, W., Rossing, W. A. H., Landis, D. A., Bianchi, F. J. J. A., van Rijn, P. C. J., Schaminée, J. H. J., Tscharntke, T., & van der Werf, W. (2016). Actionable knowledge for ecological intensification of agriculture. Frontiers in Ecology and the Environment, 14(4), 209–216. https://doi.org/10.1002/fee.1258
Geisler, G., Hellweg, S., Hofstetter, T. B., & Hungerbuehler, K. (2005). Life-cycle assessment in pesticide product development: Methods and case study on two plant-growth regulators from different product generations. Environmental Science & Technology, 39(7), 2406–2413. https://doi.org/10.1021/es049145m
Godfray, H. C. J., & Garnett, T. (2014). Food security and sustainable intensification. Philosophical Transactions of the Royal Society B, 369(1639), 20120273. https://doi.org/10.1098/rstb.2012.0273
Gould, F., Brown, Z. S., & Kuzma, J. (2018). Wicked evolution: Can we address the sociobiological dilemma of pesticide resistance? Science, 360(6390), 728–732. https://doi.org/10.1126/science.aar3780
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
Häse, F., Roch, L. M., & Aspuru-Guzik, A. (2019). Next-generation experimentation with self-driving laboratories. Trends in Chemistry, 1(3), 282–291. https://doi.org/10.1016/j.trechm.2019.02.007
Hopwood, B., Mellor, M., & O’Brien, G. (2005). Sustainable development: Mapping different approaches. Sustainable Development, 13(1), 38–52. https://doi.org/10.1002/sd.244
IPCC. (2018). Global warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. IPCC (Intergovernmental Panel on Climate Change).
IPCC. (2019). Climate change and land. An IPCC Special Report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems. IPCC (Intergovernmental Panel on Climate Change).
Idakwo, G., Luttrell, J., Chen, M., Hong, H., Zhou, Z., Gong, P., & Zhang, C. (2019). A review on machine learning methods for in silico toxicity prediction. Journal of Environmental Science and Health, Part c: Toxicology and Carcinogenesis, 36(4), 169–191. https://doi.org/10.1080/10590501.2018.1537118
Jiménez-Luna, J., Grisoni, F., & Schneider, G. (2020). Drug discovery with explainable artificial intelligence. Nature Machine Intelligence, 2(10), 573–584. https://doi.org/10.1038/s42256-020-00236-4
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Johnson, A. C., Jin, X., Nakada, N., & Sumpter, J. P. (2020). Learning from the past and considering the future of chemicals in the environment. Science, 367(6476), 384–387. https://doi.org/10.1126/science.aay6637
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255–260. https://doi.org/10.1126/science.aaa8415
Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63(1), 37–50. https://doi.org/10.1016/j.bushor.2019.09.003
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x
Loos, J., Abson, D. J., Chappell, M. J., Hanspach, J., Mikulcak, F., Tichit, M., & Fischer, J. (2014). Putting meaning back into “sustainable intensification.” Frontiers in Ecology and the Environment, 12(6), 356–361. https://doi.org/10.1890/130157
Luengo-Oroz, M. (2019). Solidarity should be a core ethical principle of AI. Nature Machine Intelligence, 1(11), 494. https://doi.org/10.1038/s42256-019-0115-3
Mabkhot, M. M., Ferreira, P., Maffei, A., Podržaj, P., Mądziel, M., Antonelli, D., Lanzetta, M., Barata, J., Boffa, E., Finžgar, M., Paśko, Ł, Minetola, P., Chelli, R., Nikghadam-Hojjati, S., Wang, X. V., Priarone, P. C., Lupi, F., Litwin, P., Stadnicka, D., & Lohse, N. (2021). Mapping Industry 4.0 enabling technologies into United Nations Sustainability Development Goals. Sustainability, 13(5), 2560. https://doi.org/10.3390/su13052560
Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835–850. https://doi.org/10.1007/s10551-018-3921-3
McLennan, S., Fiske, A., Celi, L. A., Müller, R., Harder, J., Ritt, K., Haddadin, S., & Buyx, A. (2020). An embedded ethics approach for AI development. Nature Machine Intelligence, 2(9), 488–490. https://doi.org/10.1038/s42256-020-0214-1
Mittelstadt, B. D. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
Moor, J. H. (2005). Why we need better ethics for emerging technologies. Ethics and Information Technology, 7(3), 111–119. https://doi.org/10.1007/s10676-006-0008-0
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5
Muratov, E. N., Bajorath, J., Sheridan, R. P., Tetko, I. V., Filimonov, D., Poroikov, V., Oprea, T. I., Baskin, I. I., Varnek, A., Roitberg, A., Isayev, O., Curtalolo, S., Fourches, D., Cohen, Y., Aspuru-Guzik, A., Winkler, D. A., Agrafiotis, D., Cherkasov, A., & Tropsha, A. (2020). QSAR without borders. Chemical Society Reviews, 49(11), 3525–3564. https://doi.org/10.1039/D0CS00098A
Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104. https://doi.org/10.1016/j.ijinfomgt.2020.102104
Pretty, J. (2018). Intensification for redesigned and sustainable agricultural systems. Science, 362(6417), 908. https://doi.org/10.1126/science.aav0294
Pretty, J., & Bharucha, Z. P. (2014). Sustainable intensification in agricultural systems. Annals of Botany, 114(8), 1571–1596. https://doi.org/10.1093/aob/mcu205
Pretty, J., & Bharucha, Z. P. (2015). Integrated pest management for sustainable intensification of agriculture in Asia and Africa. Insects, 6(1), 152–182. https://doi.org/10.3390/insects6010152
Rich, A. S., & Gureckis, T. M. (2019). Lessons for artificial intelligence from the study of natural stupidity. Nature Machine Intelligence, 1(4), 174–180. https://doi.org/10.1038/s42256-019-0038-z
Ripple, W. J., Wolf, C., Newsome, T. M., Barnard, P., Moomaw, W. R., et al. (2020). World scientists’ warning of a climate emergency. BioScience, 70(1), 8–12. https://doi.org/10.1093/biosci/biz088
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
Rüegg, J., Gries, C., Bond-Lamberty, B., Bowen, G. J., Felzer, B. S., McIntyre, N. E., Soranno, P. A., Vanderbilt, K. L., & Weathers, K. C. (2014). Completing the data life cycle: Using information management in macrosystems ecology research. Frontiers in Ecology and the Environment, 12(1), 24–30. https://doi.org/10.1890/120375
Ruiz-Mercado, G. J., Smith, R. L., & Gonzalez, M. A. (2012). Sustainability indicators for chemical processes: I. Taxonomy. Industrial & Engineering Chemistry Research, 51(5), 2309–2328. https://doi.org/10.1021/ie102116e
Ryan, M., Antoniou, J., Brooks, L., Jiya, T., Macnish, K., & Stahl, B. (2021). Research and practice of AI ethics: A case study approach juxtaposing academic discourse with organisational reality. Science and Engineering Ethics, 27, 16. https://doi.org/10.1007/s11948-021-00293-x
Schneider, G. (2018). Automating drug discovery. Nature Reviews Drug Discovery, 17(2), 97–113. https://doi.org/10.1038/nrd.2017.232
Schneider, G. (2019). Mind and machine in drug design. Nature Machine Intelligence, 1(3), 128–130. https://doi.org/10.1038/s42256-019-0030-7
Schneider, F., Kläy, A., Zimmermann, A. B., Buser, T., Ingalls, M., & Messerli, P. (2019). How can science support the 2030 Agenda for Sustainable Development? Four tasks to tackle the normative dimension of sustainability. Sustainability Science, 14(6), 1593–1604. https://doi.org/10.1007/s11625-019-00675-y
Seele, P. (2016). Envisioning the digital sustainability panopticon: A thought experiment of how big data may help advancing sustainability in the digital age. Sustainability Science, 11(5), 845–854. https://doi.org/10.1007/s11625-016-0381-5
Song, R., Keller, A. A., & Suh, S. (2017). Rapid life-cycle impact screening using artificial neural networks. Environmental Science & Technology, 51(18), 10777–10785. https://doi.org/10.1021/acs.est.7b02862
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991
Tilman, D., Cassman, K. G., Matson, P. A., Naylor, R., & Polasky, S. (2002). Agricultural sustainability and intensive production practices. Nature, 418(6898), 671–677. https://doi.org/10.1038/nature01014
Tkatchenko, A. (2020). Machine learning for chemical discovery. Nature Communications, 11, 4125. https://doi.org/10.1038/s41467-020-17844-8
Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105–112. https://doi.org/10.1007/s10676-009-9187-9
Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6
van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics. https://doi.org/10.1007/s43681-021-00043-6
Vermeulen, R., Schymanski, E. L., Barabási, A.-L., & Miller, G. W. (2020). The exposome and health: Where chemistry meets biology. Science, 367(6476), 392–396. https://doi.org/10.1126/science.aay3164
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11, 233. https://doi.org/10.1038/s41467-019-14108-y
Vo, A. H., Van Vleet, T. R., Gupta, R. R., Liguori, M. J., & Rao, M. S. (2020). An overview of machine learning and big data for drug toxicity evaluation. Chemical Research in Toxicology, 33(1), 20–37. https://doi.org/10.1021/acs.chemrestox.9b00227
von Lilienfeld, O. A., & Burke, K. (2020). Retrospective on a decade of machine learning for chemical discovery. Nature Communications, 11, 4895. https://doi.org/10.1038/s41467-020-18556-9
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J. W., da Silva Santos, L. B., & Bourne, P. E. (2016). The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3, 160018. https://doi.org/10.1038/sdata.2016.18
Wei, J. N., Duvenaud, D., & Aspuru-Guzik, A. (2016). Neural networks for the prediction of organic chemistry reactions. ACS Central Science, 2(10), 725–732. https://doi.org/10.1021/acscentsci.6b00219
Zagonari, F. (2020). Environmental sustainability is not worth pursuing unless it is achieved for ethical reasons. Palgrave Communications, 6, 108. https://doi.org/10.1057/s41599-020-0467-7
Zhang, L., Tan, J., Han, D., & Zhu, H. (2017). From machine learning to deep learning: Progress in machine intelligence for rational drug discovery. Drug Discovery Today, 22(11), 1680–1685. https://doi.org/10.1016/j.drudis.2017.08.010
Zimmerman, J. B., Anastas, P. T., Erythropel, H. C., & Leitner, W. (2020). Designing for a green chemistry future. Science, 367(6476), 397–400. https://doi.org/10.1126/science.aay3060
Open Access funding enabled and organized by Projekt DEAL. No funding was received to assist with the preparation of this manuscript.
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Hermann, E., Hermann, G. & Tremblay, JC. Ethical Artificial Intelligence in Chemical Research and Development: A Dual Advantage for Sustainability. Sci Eng Ethics 27, 45 (2021). https://doi.org/10.1007/s11948-021-00325-6
Keywords: Artificial intelligence; Research and development