Collection

AI Ethics in the Generative AI Era

In less than a year, the explosive proliferation of so-called “foundation models” and generative AI (GenAI) applications has ushered in an unprecedented commercialization of AI technologies. Within a few months of its release, ChatGPT captured public attention, amassing over 100 million users and triggering an “age of competition” among large tech corporations vying for market share amid the GenAI boom. In this topical collection, we will explore the ethical and societal implications of the rapid development and spread of GenAI technologies. Areas of interest include: the potential for scaled AI-generated disinformation and misinformation; novel challenges to academic and research integrity; skills loss and overreliance; algorithmic bias; discriminatory amplification or obfuscation of voices and values in AI-generated content; reification of dominant cultural perspectives that endangers the voices of historically marginalised groups; labour displacement; environmental impacts; governance challenges; and emerging prospects for marshalling GenAI applications for the public good. The goal of the collection is to stimulate rigorous, interdisciplinary, and accessible analysis of the potential risks, opportunities, and ethical impacts created by the accelerating development of GenAI techniques and associated applications.

Editors

  • David Leslie

    David Leslie is Director of Ethics and Responsible Innovation Research at The Alan Turing Institute and Professor of Ethics, Technology and Society at The Digital Environment Research Institute, Queen Mary University of London. He is a philosopher and social theorist whose research focuses on the ethics of emerging technologies, AI governance, data justice, and the social and ethical impacts of AI, machine learning, and data-driven innovation. dleslie@turing.ac.uk

  • Mhairi Aitken

    Mhairi Aitken is an Ethics Fellow in the Public Policy Programme at The Alan Turing Institute and an Honorary Senior Fellow at the Australian Centre for Health Engagement, Evidence and Values (ACHEEV) at the University of Wollongong, Australia. She is a sociologist whose research examines the social and ethical dimensions of digital innovation, particularly relating to uses of data and AI, with a particular interest in the role of public engagement in informing ethical data practices. In recognition of her work in this area, she has become a major public voice in UK press debates on GenAI and ChatGPT. maitken@turing.ac.uk

  • Atoosa Kasirzadeh

    Atoosa Kasirzadeh is an Assistant Professor and Chancellor’s Fellow at the University of Edinburgh, where she is also Director of Research at the Centre for Technomoral Futures in the Futures Institute. A philosopher and ethicist of science and emerging technologies, an applied mathematician, and an engineer, her recent work focuses on the implications of machine learning, in particular large language models, for science, society, and humanity. atoosa.kasirzadeh@ed.ac.uk

  • Rebecca Johnson

    Rebecca Johnson is a doctoral researcher at The University of Sydney working on big questions in AI ethics, and has become a leading voice in tech ethics with a focus on large language models (LLMs) and generative AI (GAI). She uses sociotechnical approaches to understand how we imbue emerging technologies such as AI with our own biases and values, and how those values are amplified and reflected back. rebecca.johnson@sydney.edu.au

  • Peter Smith

    Peter Smith is Emeritus Professor at the University of Sunderland, a Principal Fellow of The Higher Education Academy, and a Fellow of the British Computer Society. He is a prolific writer in the fields of artificial intelligence, computer science, and mathematics, and a member of the Editorial Board of AI and Ethics. peter.smith@sunderland.ac.uk

  • Harish Arunachalam

    Harish Arunachalam is Principal Data Scientist in the Responsible AI group at Verizon, where he studies the enterprise-level impacts and risks of advanced artificial intelligence systems and develops tools, frameworks, and methods to measure and mitigate those impacts. His Ph.D. work is in the fields of artificial intelligence, machine learning, and computer science. He is a member of the ACM Technology Policy group and recently worked on the ACM Generative AI guidelines. dr.harish.arunachalam@outlook.com

Articles (1 in this collection)