1 Introduction

Advances in artificial intelligence, driven in part by the availability of massive amounts of data, have recently stirred both public and academic debates about the opportunities as well as the risks posed by these developments. Since the advent of systems such as ChatGPT or DALL-E, which are easy-to-use yet very powerful tools for generating textual or visual content, the disruptive impact of artificial intelligence in many societal domains can no longer be ignored. Deep philosophical questions underlie seemingly mundane concerns. For example, while the increasing use of ChatGPT by students initially prompted schools and universities to reconsider the adequacy of existing assessment methods, these developments may also challenge us to reconsider what the meaning and goals of education should be and how they can best be achieved within educational settings.

On the one hand, such “generative” AI systems, largely founded on large language models (LLMs), have reinvigorated old debates around ‘strong’ AI, rekindling dreams and fears of a general artificial intelligence, that is, of machines potentially capable of displaying all the skills and capacities which we deem intelligent when displayed by humans. While misleading opinions about the alleged sentience of such tools should be criticized and debunked (cf. Durt et al., 2023; Tiku, 2022), they point to philosophically and ethically interesting questions. One of these concerns the meanings of ‘meaning’ and ‘understanding’: While it seems clear that current LLMs do not understand but merely produce language patterns based upon probabilistic models, the question remains as to what else genuine understanding would require (Bender et al., 2021). The most straightforward response would be that meaning should go beyond inner-linguistic patterns and that understanding also requires reference to the world. However, given the likely combination of visual and textual materials and the possible embodiment of these tools in other technologies, such extra-linguistic semantic grounding seems to be only a matter of time (Lyre, 2020). If that is the case, would this satisfy the demands for understanding, and if not, what would be missing? Moreover, even if such tools do not understand, what remains are the often strong impressions of understanding or even sentience that such systems induce not only in naïve but also in expert users. Assuming for a moment that such debates are not mere public spectacles but reveal something about human cognition and emotion, these tendencies to anthropomorphize machines have ethically relevant consequences for the responsible design and use of such technologies (Weizenbaum, 1966). Thus, while debates around strong AI often appear to oscillate between utopian dreams and dystopian nightmares, it may be time to return to more nuanced philosophical and ethical analyses.

On the other hand, developments in the context of so-called ‘weak’ AI, which are often overlooked by the public due to the hype surrounding the aforementioned generative AI technologies, raise more mundane, yet no less pressing, ethical and philosophical questions. Automated decision-making, crime prediction, facial recognition, and deep fakes (to list but a few technologies whose use is rapidly being normalized), together with the increasing reliance on often opaque software systems, raise a myriad of thorny issues. From concerns about surveillance and privacy, to justice and discrimination, to more fundamental questions about freedom and autonomy, the development and deployment of AI-powered information and communication technologies poses numerous ethical, social, and political challenges that warrant close scrutiny from diverse philosophical perspectives. Indeed, the proliferation of such technologies strikes at the core of many widely shared values, disrupting moral norms and intuitions, and prompting not only critical reflection on the use of AI in contemporary life but also self-reflection on the value of these values in our lives. Are these values capable of offering the guidance and support needed to improve our lived experience in an increasingly AI-driven world? Do they assist in establishing the relationships and conditions for mutual flourishing? Are they capable of revealing and countering both existing and newly created power hierarchies? There is, as such, a need to identify not only the potential negative impact of AI-type technologies on the status quo but also the opportunities they might provide for ameliorating our moral landscape.

Many of these issues are taken up and discussed in this topical collection, comprising selected submissions to the CEPE/IACAP Joint Conference, which took place from July 5 to 9, 2021, at the University of Hamburg. The overarching theme of the conference was the “Philosophy and Ethics of Artificial Intelligence,” thus focusing on what in recent years has emerged as a ‘hot topic’ within philosophy and beyond, but which has also been at the heart of research within the CEPE and IACAP communities for a long time. CEPE, which stands for Computer Ethics: Philosophical Enquiry, is the biennial conference of INSEIT, the International Society for Ethics and Information Technology, while IACAP is the International Association of Computing and Philosophy, which holds an annual conference. Since both communities have been investigating developments around artificial intelligence for decades, the “Philosophy and Ethics of Artificial Intelligence” served as an ideal topic to merge the two complementary perspectives of CEPE and IACAP. Over the course of five days, the conference encompassed over 30 sessions (with almost 90 talks on numerous topics), 5 panels, and 5 distinguished keynotes by Sabina Leonelli, Linnet Taylor, Helen Nissenbaum, Rafael Capurro, and Carissa Veliz. Following a separate call for full paper submissions, 13 articles (12 original research papers and one commentary) were selected for publication in this topical collection (see Table 1).

Table 1 Overview and strands

2 Overview of Contributions to the Topical Collection

The topical collection emerged from a full week of high-quality paper presentations at the CEPE/IACAP Joint Conference 2021, and the individual papers were chosen purely on the merit and originality of their respective arguments as well as their ability to advance the existing ethical and philosophical discourse on AI. Of course, the manuscripts were also subjected to the rigorous review process of Digital Society. As such, no attempt was made by the editors to favor papers that would fit a specific theme or narrative. Nevertheless, the fact that the collection does trace several interrelated strands of normative concern is telling and, perhaps, indicative of the broader social moment caught up in this most recent ‘AI summer’. It evidences a sense of the socio-technical transformation at our feet, whether it is to be embraced, resisted, harnessed, or redirected. More specifically, the collected papers explore, through both conceptual and technological investigations (Simon et al., 2020), the characteristics and suitability of an array of prominently discussed values in AI design, development, use, and regulation. In the process, they illuminate several difficulties raised by AI technologies that even call into question whether the turn to established values offers the best strategy for realizing ethical AI in the future. By identifying four interrelated strands of concern and providing brief outlines of the key arguments of each contribution, this section seeks to serve as a gateway and guide to the topical collection.

2.1 Strand I: On Democracy, Regulation, and (Public) Legitimation in an AI-Powered World

To start, Pawelec (2022), whose paper received the best student paper award, provides an in-depth case study in which an increasingly deployed technology is argued to raise fundamental questions concerning the functioning of democracy. Specifically, Pawelec surveys and evaluates the potential impact of deepfake technologies on democracy and democratic values. Deepfakes, i.e., synthetic audio-visual media created using AI to emulate real people, are rapidly improving in quality, persuasiveness, and accessibility. Drawing on both Warren’s (2017) problem-based democracy theory and elements of deliberative democracy theory, Pawelec stresses the negative impact of deepfakes on core democratic functions and norms, such as empowered inclusion, collective agenda and will formation, and collective decision-making. The paper encompasses an analysis of over 300 academic articles, media reports, and internet publications focused on deepfake-based disinformation and deepfake hate speech and concludes that existing and potential applications of deepfake technologies can undermine public trust, spread disinformation, erode the epistemic quality of content relevant to collective decision-making, and even discourage certain social groups from participating in public life. Moreover, Pawelec argues that deepfakes undermine the fairness and integrity of politics and the associated electoral process by creating opportunities for targeted individual attacks on opponents (including by foreign interest groups) and through the “liar’s dividend,” whereby liars benefit from the epistemic murkiness that deepfakes produce. Finally, deepfakes also present challenges for news media by increasing the complexity of verifying content. While the paper acknowledges that further research is needed to assess the potential positive effects of deepfakes on democracy, it serves as an important foundation for reflecting on effective policy measures needed to mitigate the potential harmful effects of deepfake technologies.

Picking up on the question of governance, the article by Rosengrün (2022) provides a critical examination of existing attempts to regulate AI more generally, elucidating the potential threat such technologies pose to the rule of law (and, by extension, democracy). Building on Lessig’s (2006) work on the regulatory power of source code, Rosengrün argues that AI technologies weaken the rule of law by causing and stimulating a shift towards a “rule of code”. This denotes a state in which the technologies (or rather their makers and owners) set the legislative and regulatory agenda and influence economic markets and social norms. In short, the source code of private corporations occupies the driving seat in these domains, thereby bypassing traditional democratic mechanisms, which are often seen as lagging or lacking in salient ways. This has, of course, recently been made plainly evident in the case of OpenAI’s ChatGPT and its impact on the EU’s draft legislative proposal for an Artificial Intelligence Act (Volpicelli, 2023). This lends further credence to the strong case made by Rosengrün that democratic societies need to regain control over their source code and digital infrastructure and thereby diminish the extent to which both governments and the general public depend on private entities, particularly the GAFAM companies (Google, Amazon, Facebook, Apple, and Microsoft). The paper closes with valuable recommendations on how democratic societies might proceed to do so, highlighting the need for digital infrastructures built on open-source software, data parsimony, security, and net neutrality, as well as the strict enforcement of taxation law, competition law, and anti-trust regulation, and the prioritization of “technoliteracy” education.

Further adding to the discussion about what good governance of algorithmic services should entail, the paper by Kera and Kalvas (2022) presents and compares the findings of two research studies: the first, an exploratory sandbox workshop in which participants observed and reacted to the effects of new algorithmic services that caused discrimination in a simulated smart village; the second, a longitudinal survey study on concerns and expectations regarding contact tracing apps during the first and second COVID-19 waves in June 2020 and April 2021. Despite functional and contextual differences between the hypothetical services in the sandbox environment and real-world Bluetooth-enabled contact tracing, both services present the challenge of being subject to new forms of “polite surveillance,” that is, a type of pervasive everyday surveillance that citizens accept in order to signal civic duty or belonging to a community. Consequently, when asked to evaluate different regulatory and design options, participants expressed similar views and concerns: In both cases, the majority of participants (a) rejected the algorithmic services, (b) were skeptical about privacy-preserving solutions, and (c) expressed low trust in government oversight. At the same time, however, the findings of the sandbox study indicate that public support for all technical and non-technical interventions grows when people become actively involved in the design and regulation of new services and when audits are carried out by independent stakeholders rather than government institutions. Expanding on this, the authors identify a demand for “no algorithmization without representation” and warn that approaches seeking to regulate algorithmic services ex post or ex ante through aspirational frameworks and guidelines overlook the issue of political participation as a means of establishing legitimacy in any social and technological process.

In a commentary on Kera and Kalvas’s paper, Ballsun-Stanton (2022) suggests that the claim of “no algorithmization without representation” is a rediscovery of participatory design within the context of the serious games movement and discusses the merits and problems of such an approach.

2.2 Strand II: On the Challenge of Protecting Privacy in Today’s Data Economy

Moving from broader debates about AI governance to engagements with a particular value, namely privacy, Carter (2022) explores alternatives to the privacy notices that, in the post-GDPR landscape, have become ever more prevalent on phones and online. Such notices, she argues, aim to elicit informed consent from users but are not well suited to helping them make informed decisions. Rather, they exploit cognitive biases (from framing effects to inertia bias to conjunction fallacies) and coax users into making the choices that service providers want them to make. But how could privacy notices be repurposed to create a space for more meaningful user decisions that accurately reflect users’ personal values? In exploring how one could design for such value-centered privacy decisions, Carter draws on Killmister’s (2017) Four-Dimensional Theory of Self-Governance (4DT), which focuses on personal autonomy and is able to account for the “apathetic user” phenomenon that occurs when users become so overwhelmed by privacy notices that they simply no longer care and always consent to a service’s data collection practices. Using the 4DT lens, Carter then assesses the extent to which current personalized privacy assistants (PPAs), which use dynamic and selective notices, allow for value-centered decision-making, paying particular attention to a number of remaining challenges. The insights from this assessment are then used to inform the conceptualization of a value-centered privacy assistant (VcPA) for smartphone app selection. One important suggestion is that VcPA profiles could be based on the user’s personal values rather than on privacy preferences alone. This, however, would require user tests to determine in what ways values intersect with app data collection preferences and how this intersection could be operationalized as VcPA profiles.

In a fruitful blend of technical and philosophical analysis, the paper by Adomaitis and Oak (2023) adopts an altogether different approach to related privacy concerns by considering the ethical implications of using adversarial machine learning as an obfuscation technique to forcefully opt out of unwanted data processing. They start their investigation by pointing to Clearview AI, a software company that scraped billions of images from social media sites to build facial recognition models without people’s knowledge or consent, and argue that individuals often find themselves at a clear power disadvantage when seeking to safeguard their privacy and autonomy online. One way to counterbalance this power asymmetry is to employ obfuscation techniques. In the case of facial recognition, such obfuscation could work by feeding the algorithm poisoned images, which teach the model a distorted representation of the individual and lead to the misclassification of a live, unaltered image of that person. But can such an adversarial attack be ethically justified, given that poisoning the data degrades the overall performance of the machine learning model, leading to incorrect predictions and potentially harming other users? Comparing the facial recognition case with the use of similar obfuscation techniques in the medical sector, the authors show that the ethical evaluation of such adversarial attacks may depend on the individual case and context: While the way Clearview AI obtained and used social media images seems to justify a privacy-motivated adversarial response, the poisoning of legitimately collected and employed medical data is clearly much more problematic given collateral damage considerations and the potential risks to human safety.
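To make the mechanism behind such obfuscation slightly more concrete, the sketch below illustrates the general idea of gradient-based image perturbation on which cloaking tools build: pixels are nudged in the direction that increases a recognizer’s loss on the person’s true identity, so that scraped copies of the image mislead the model. This is only a minimal, generic illustration, not Adomaitis and Oak’s method or any particular tool’s implementation; the model, label, and epsilon value are hypothetical placeholders, and deployed systems rely on considerably more sophisticated, training-time poisoning strategies.

```python
# Minimal, generic sketch of a one-step gradient-based perturbation,
# illustrating the kind of adversarial obfuscation discussed above.
# The model, label, and epsilon are hypothetical placeholders.
import torch
import torch.nn.functional as F


def cloak_image(model: torch.nn.Module,
                image: torch.Tensor,    # shape (1, 3, H, W), pixel values in [0, 1]
                identity_label: int,    # index of the person's true identity
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a subtly perturbed copy of `image` that raises the recognizer's
    loss on the true identity, so that scraped copies mislead the model."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)

    # Compute the recognizer's loss with respect to the true identity.
    logits = model(image)
    loss = F.cross_entropy(logits, torch.tensor([identity_label]))
    loss.backward()

    # Move each pixel a small step in the direction that increases the loss,
    # keeping the change visually imperceptible (bounded by epsilon).
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Crucially for the authors’ argument, the very property that makes such perturbations effective for the individual, namely that they degrade the model’s behavior, is also what generates the collateral costs whose ethical weight varies with the context in which the model was built and used.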

Against the background of intense debate about the digitalization of the social welfare sector, Schneider (2022) examines the process of case recording in day-to-day social work practice. More specifically, based on a series of interviews with experts working for welfare agencies or social service providers, the paper explores the kinds of information that are recorded in client information systems (CIS), the kinds of information that are not, and the reasons why certain information is withheld from digital records. As Schneider emphasizes, while social welfare agencies may have well-established procedures and routines for case recording, it is ultimately left to the discretion of individual professionals to decide what to include in and what to omit from client records. The paper finds that there are several reasons why information is excluded, ranging from technical limitations of the hardware and software used to store records, to profession-specific ethics and privacy considerations, to discrepancies between what different practitioners deem relevant. Two important inferences are drawn from these findings. First, and in view of current debates about the use of AI applications in the welfare sector, it is argued that bias does not begin only with data mining or analysis but is already embedded in the decisions about what to record. Raising awareness of this among those who further process the recorded information seems paramount. Second, and in line with the first point, the findings also show that the challenge of ensuring privacy does not arise only during data analysis but is already a concern during case recording. According to Schneider, it would thus be important to focus not exclusively on the ethics of design but also on the ethics of the use of AI.

Lastly in this strand, and directly grappling with the myriad tensions built into existing privacy practices and attempts to protect privacy, the contribution by Puri (2023) offers a valuable and novel expansion of the well-established debate concerning digital privacy. Specifically, it introduces the concept of “Mutual Privacy” as a “group right,” thereby standing in contrast to the dominant discourse surrounding individual rights to privacy (even in cases of collective privacy concerns). Puri argues that contemporary privacy challenges such as group profiling and mass surveillance are not adequately addressed by existing individual-oriented accounts. Mutual Privacy, by contrast, emphasizes an array of shared interests (i.e., genetic, social, and democratic interests) and vulnerabilities (e.g., to algorithmic grouping) that arise from our interconnected existence, in which privacy violations cause and entail both individual and collective harms. Taken together, these prompt a focus on the group aspects and the social value of privacy, such that Mutual Privacy emerges as an “aggregate shared participatory public good” to be protected as a group right. Such a right recognizes that privacy is both personal and collective and that its value, the interests it serves, and the harms it seeks to mitigate are tied up in this interrelation. As such, it is portrayed as complementing individual privacy in that it serves both to enhance and to safeguard it. The paper thereby exposes the limits of regulatory approaches that seek to secure privacy at the individual level alone, for example by seeking to empower individual decision-making on privacy matters, and highlights the need to shift the focus of the privacy regulatory landscape from individual to collective concerns. It is down this path, Puri argues, that we must walk if we are to meaningfully address the contemporary challenges of privacy protection.

2.3 Strand III: On Solidarity, Inclusivity, and Responsibility in AI Design

In the topical collection’s third strand, the contribution by Rudschies (2023) turns to the relatively underexplored “principle of solidarity” found in several AI ethics guidelines. Solidarity, she elucidates, is a multifaceted concept frequently associated with cooperation, interdependence, pro-social behavior, inclusion, and collective well-being. The paper disentangles these myriad usages evident in a variety of academic disciplines and proposes a helpful “discipline-neutral account” wherein solidarity comprises five elements: relationality (it exists only within groups), a connecting element that grounds the grouping relationship (something shared that helps identify the group), an awareness and recognition of the relationship (i.e., a cognitive element), a motivational source through which individuals are invested in the group, and a sense of communal duty that generates a readiness to contribute. Rudschies then seeks to put this account to work by considering what it might contribute in the context of Ethics in Design, where methods such as user-centric or participatory design are discussed as potential enablers for adopting a solidarity perspective in AI design and development. On this account, Rudschies argues that a solidarity-attuned Ethics in Design approach should seek to assess potential human rights impacts, identify stakeholder interests and needs, and ensure a just distribution of benefits and burdens. However, in an admirably balanced way, the paper also demonstrates that applying solidarity to AI technology design faces significant challenges, owing to the difficulties of defining and recognizing the community involved and of satisfying the cognitive dimension of solidarity, as well as the competitive nature of market-based societies, which may hinder the development of the kind of civic-mindedness necessary for solidarity. Nevertheless, the paper maintains that solidarity remains an important perspective that might guide ethical decision-making in technology design, one capable of generating practical recommendations for technologists developing AI systems.

Also dealing with questions of ethically informed technology design, Elder (2023) examines the potential of caregiving robots to assist in geriatric care. More specifically, she considers how such robots should be designed when dealing with elders who exhibit racist, sexist, abusive, aggressive, or impulsive behavior. Drawing on Confucian ethics, and in particular the concept of filial piety, she points to the necessity of reconciling two conflicting desiderata: to value and care for seniors, but also to clear-mindedly deal with their moral shortcomings. One key insight from Confucian accounts of filial piety is the articulation of a duty to remonstrate with elders when they err or engage in moral wrongdoing. Such interventions, however, are not always welcomed, potentially leading to tensions between the caregiver, be that a family member or a professional, and the senior. But while Confucius emphasizes the obligation to remonstrate with elders when they go astray, he also contends that making oneself a target of rude or abusive behavior is not what filial piety calls for and may in fact reinforce such behavior by providing such seniors with a handy outlet. From a Confucian perspective, it is thus conceivable that therapeutic robotic interventions could support intergenerational caregiving work in several ways: On the one hand, robots could be designed to promote moral uprightness by both exhibiting respectful and polite behavior themselves and by issuing verbal rebukes during rude interactions. On the other hand, they could help caregivers care for themselves by providing some much-needed distance and respite, which may ultimately protect the caregiver’s affective ties to the patient and promote the well-being of both.

Pondering questions of both inclusivity and responsibility, Oluoch et al. (2022) take a closer look at how advancements in telecommunications and geoinformation science have transformed cartographic practice. Following a brief genealogy of cartography as a discipline and instrument of state power, they examine how AI-assisted remote sensing (RS) techniques, Internet connectivity, and ever larger quantities of easily accessible spatial data have led to a democratization of map production, enabling NGOs, private companies, and research institutions to engage in digital mapping for a variety of purposes. The authors stress, however, that this shift in purpose and ownership is not without its challenges, as there are a number of technical, social, and ethical aspects to consider. Taking the use of AI-assisted RS techniques for the algorithmic classification of deprived urban areas in the Global South as an example, they highlight several areas of concern: On the one hand, there appears to be a gap between the promise of AI-assisted mapping to improve the wellbeing of communities in deprived urban areas and the capability of these communities to participate in mapping efforts, which can (a) reduce the accuracy of the maps, as input from those on the ground is missing, and (b) undermine the utilization of the maps, as local authorities may not be sufficiently familiar with the product. On the other hand, spatial data can detrimentally affect the lives of those being mapped, for instance when maps are used by authorities to identify unwanted settlements; this illustrates the responsibility of those who control spatial data towards those who are represented in the data. The authors conclude that while there is great technical potential in the use of AI techniques for deprived area mapping, a range of ethical aspects needs to be brought into greater consideration in mapping practice and the geoinformation literature.

Concluding the third strand, Wolf et al. (2022) contemplate how to treat the online content produced during a person’s digital life after that person has physically died. More specifically, the authors approach the subject of digital remains from the perspective of computing professionals who must make practical decisions about whether to preserve or delete data generated by a human who is now dead. They show that while different theories of personhood and identity raise important questions when considering the significance of digital remains, such accounts offer only little, or impractical, guidance on how computing professionals should satisfy their professional responsibilities. This critique of the current state of scholarship is followed by practical advice: First, the authors present three scenarios that illustrate important differences in how digital remains should be dealt with in different contexts. Second, they suggest a more fine-grained categorization of digital remains that distinguishes between abandoned artifacts tied to humans and abandoned artifacts tied to artificial agents. Third, they suggest that many ethical concerns could be mitigated if systems required users to specify the treatment of their digital remains already at account creation. Fourth and finally, they propose the principle that, absent any overriding prior arrangements, digital remains ought to be deleted, a standard that will result in some digital remains being lost but that is both practical and defensible on privacy and environmental grounds.
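To illustrate how the third suggestion might look in practice, the following sketch shows a hypothetical account-creation setting in which users specify the treatment of their digital remains, with deletion as the default in line with the authors’ final principle. The type names and options are illustrative assumptions, not a specification taken from the paper.

```python
# Hypothetical account-creation setting for digital remains, with deletion as
# the default, echoing the default-deletion principle discussed above.
# Names and options are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RemainsDirective(Enum):
    DELETE = "delete"            # remove all data after death is confirmed
    TRANSFER = "transfer"        # hand the data over to a named steward
    MEMORIALIZE = "memorialize"  # keep a read-only, memorialized profile


@dataclass
class AccountSettings:
    username: str
    # Absent an explicit choice by the user, digital remains are deleted.
    remains_directive: RemainsDirective = RemainsDirective.DELETE
    steward_contact: Optional[str] = None  # required only for TRANSFER

    def validate(self) -> None:
        # A transfer directive is meaningless without someone to transfer to.
        if (self.remains_directive is RemainsDirective.TRANSFER
                and not self.steward_contact):
            raise ValueError("A steward contact is required for TRANSFER.")
```

Making the directive an explicit, defaulted field at sign-up captures the gist of the proposal: the professional is relieved of guessing the deceased user’s wishes, while the deletion default keeps the burden on those who actively want their data preserved.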

2.4 Strand IV: Reconsidering AI Ethics

Finally, the contribution by van Maanen (2022) takes a step back, embraces a meta-perspective, and asks us to critically reflect on the role of AI ethics in contemporary technological development and the state of ethics evident in today’s tech industry. Specifically, van Maanen argues that both the current trend of developing ethical guidelines and establishing ethics committees and the turn to the dominant ethical theories of Western philosophy in search of standardizable ethical principles fall victim to the same shortcoming: they are easily roped into the practice of “ethics washing,” whereby published principles or guidelines, which are often neither adhered to nor meaningfully enforced, can be used strategically to diminish ethical concerns over the practices of a given company. In short, van Maanen holds that existing practices speak against a genuine interest in the value of ethics. To address this, van Maanen argues that there is a need to politicize data ethics and, drawing on the work of political philosopher Geuss (2008), proposes a question-based ethical data practice. The questions at the center of this approach are constructed to reveal morally salient features of technological innovation and practice, necessitating active inquiry rather than box-checking. As such, the proposal emphasizes the need to understand technological development as a socio-political phenomenon, which warrants a context-sensitive, historically informed reformulation of ethics. In so doing, van Maanen calls for a diversified ethical approach that encourages ethicists to reflect on their positioning and to consider the implications of their work, especially in relation to the tech industry. In conclusion, he suggests that the term “ethics” may not be appropriate for this politicized approach and instead advocates the notion of “data politics” to promote more practice- and politics-oriented academic research.

3 Conclusion

Despite the recent explosion of academic attention to artificial intelligence generated by the current ‘hype’ surrounding various AI technologies, it is evident that the philosophy and ethics of AI remains a lively and fruitful space, with numerous meaningful and innovative contributions, many captured in this topical collection, emerging from it. Indeed, the contributions collected here unequivocally highlight both the possibility of and the need for adding nuance to ongoing discourses and reflecting on the current socio-technical moment, while also demonstrating that there are still new lines of inquiry to be explored. They serve, as such, as a call to action to revisit, and if needed correct, old roads, or else to boldly map out new ones. Yet even as emerging AI applications, in particular in the context of so-called “generative AI,” send notable ripples across many of our social spaces, there are those who are underwhelmed by the current state of computer innovation, lamenting the loss of sheen (Vallor, 2022). Accordingly, this particular ‘AI summer’ seems to be caught between frenzied talk of revolution, genuine dismay, and bored dismissal. This iteration, however, is also marked by features that might provide some measure of hope. Two of these became plainly evident while organizing and attending the conference that birthed this topical collection.

First, and anecdotally, there appears to have been a meaningful shift in the extent to which ethics enters the psyche of soon-to-be computer engineers. Far from being treated as a non-feature or an onerous requirement to be paid lip service, ethical scrutiny seems increasingly recognized as something that should guide research and development from the outset and throughout the life cycle of any product in question. It strikes us that this is no longer ‘news’ to the current cohort of would-be developers, nor an unwanted distraction. Perhaps others who also teach computer science students can second this. Second, and buttressing the first observation, is the growing inter- and trans-disciplinarity of the relevant communities, such as those found in CEPE/IACAP, exploring these topics. Even as this necessitates a period of co-learning and reciprocal knowledge exchange (and the meaningful intellectual ‘battles’ that this will unavoidably entail), the continued letting down of disciplinary guards for good-faith collaborative inquiry can only be to our collective benefit.

Indeed, it may be our best remedy for a further ailment that seems to be widely shared: the worry that, as a discipline, we are not only failing to do enough, or to do it with sufficient pace, but that it is folly to think that our academic work can ever keep abreast of innovation and her brutal taskmaster. The above points speak to the possibility of doing precisely that by bringing technologists and philosophers into active discourse. In doing so, philosophical scrutiny may not only find a mutually informative inside track but potentially also an increasingly welcoming landing perch, provided, of course, that continued efforts are made to facilitate such a coming together. Moreover, it requires that a middle space between utopian and dystopian, or techno-optimistic and techno-pessimistic, considerations of AI technologies be located, along with the meaningful exchanges that such opposing perspectives tend to dispel. That is, to soberly adopt what Hughes and Eisikovits (2022) have, in a different context, labelled a “technorealist” position: one that tacitly accepts (as seemingly intractable) the place of technology in our social fabric, yet seeks to harness this vast existing innovative impulse and its established apparatus and, through sustained sensitivity towards the diverse individual and social experiences of technology, to aim research and development down more collectively desirable paths. What are the ethical, conceptual, and institutional foundations of such a project? Who is to take up this mantle, and how might it be as inclusive as possible? How will it be sustained, maintained, or improved? These and other vital lines of inquiry beckon and necessitate the contributions of the broadest epistemic collective that can be rallied.