Our Digital Mirror

The digital world has a strong tendency to let everything in its realm appear as a resource. This includes digital public discourse and its main creators, humans. In the digital realm, humans constitute the economic end and at the same time provide the means to fulfill that end. A good example is the case of online public discourse. It exemplifies a range of challenges, from user abuse to amassment of power, difficulties in regulation, and algorithmic decision-making. At its root lies the untamed perception of humans as economic and information resources. In this way, digital technology provides us with a mirror that shows a side of what we are as humans. It also provides a starting point to discuss questions such as who we would like to be, including digitally, which purpose we should pursue, and how we can live the digital good life. For Antoine de Saint-Exupéry (1939), airplanes can become a tool for knowledge and for human self-knowledge. The same is true for digital technologies. We can use the digital as a mirror that reflects an image of what we are as humans. And when we look closely, it can become an opportunity to shape who we are in ways that make us more attractive.

However, regarding digital humanism, a program of only technical advancement would fall short of its real ambition. Although technologically driven and proceeding with the intention of improving technology, digital humanism more often problematizes science-technology relationships, human attitudes, and fundamental concepts that we have come to take for granted. In digital humanism, the digital becomes the mirror that impresses a picture of who we are on ourselves and lets us realize what we should change: about us, about technology, and about the lives that we live.
The pervasiveness and ubiquity of information technology means that just about everything that is subject to digitization is also subject to its transformational powers. Central concepts of our lives, both private and professional, have been transformed even when they were seemingly only digitized. Collaboration is now something happening in shared directories, meeting new people is mediated by apps and preference settings, news has become pop-up windows, and museums curate data. Intelligence is something that machines exhibit, and chess has been turned into a formal calculation task rather than a board game since around the 1970s. It is as if we had to reinvent the whole world in the digital again and examine its values and implications for our lives. A good example is the case of public discourse, certainly also a feat of the new digital sphere.

The Example of Online Discourse
The advent of the internet significantly impacted the way we speak in public. From early newsgroups to today's online social networks, this development deserves its own, more thorough investigation. Today, public discourse without the digital has become quite unthinkable. At the same time, the phenomenon of online discourse is now a major concern of policy makers as well as of critical thinkers, researchers, and many citizens. Its shortcomings, from fake news to echo chambers, from foreign political influence to the pervasion of illegal content, are blamed on its digital form, hence, on technology. Some key challenges include the following:

- Platforms exploit discourse to drive user behavior. They can prioritize emotional content over facts, nudge users into staying online, and have become viable ways to influence user behavior, including political decisions.
- Algorithms supervise and police user-contributed online content with the aim to detect illegal matter, spot infringements of intellectual property, remove what may be considered harmful, etc.
- There is a massive shift of power over discourse control from traditional rulers of public discourse, such as media, politicians, and thinkers, to digital platforms.
- Discourse on online platforms has proven enormously difficult for any single country to regulate. The only exceptions come through massive investments in surveillance, censorship, and severe limitations of freedom of expression, for example, in China.
- User-generated discourse provides platforms with large amounts of data to learn from, build models, predict behavior, and generate profit in various ways, including targeted advertising based on behavioral prediction.
These challenges are by no means unique to online public discourse. We find massive shifts of power toward platforms throughout the world of internet business; platforms have generally proven difficult to regulate, not just regarding public discourse; algorithmic decision-making affecting humans happens through a broad range of applications; harvesting data from all sorts of electronic devices lies at the root of surveillance capitalism; and luring users to stay online through emotional targeting happens across a range of online media today. Online public discourse is really but one example, albeit one that is pervasive throughout societies all over the world.
Beyond the listed concerns, digital online discourse seems to affect members of societies in their sense of belonging. The individualized nature of person-targeted discourse, its character of entertainment, and the self-fulfilling quality of opinionated content sharing and creation have severely undermined shared views, collective narratives, and communal perception. It has been suggested that digital discourse only involves a "simulated" public. Earlier analyses of digital culture focused more on the creation of new communities as well as on referentiality and algorithmicity (Stalder 2016) as ways of creating a shared understanding. Today, however, discourse moderation algorithms reinforce an individualized monologue in which references serve to propagate an individual's opinions and lead to the often-diagnosed fragmentation of society. Discourse in the digital world thus runs the risk of endangering the common good, not necessarily because it attacks any good specifically but because it undermines the concept of the commons. It limits what is shared among people and thus what contributes to forming a societal collective. This is yet another case of digital technologies not only changing human behavior but changing the very essence of key concepts in often unexpected and unpredictable ways.
A recurring topic in digital humanism is that of primacy of agency, or who shapes whom: is technology shaping humans, or should technologies be designed in accordance with human needs and values? Unfortunately, matters in digital technologies are never so simple, and there is mutual influence between the two spheres. In digital humanism, this phenomenon has been called co-evolution. When co-evolution affects basic concepts, such as discourse, it seems futile to repair these fundamental concept drifts and the challenges they create only by mending technologies. Much beyond co-evolution, there is a deeper, more philosophical question to ask. It concerns a matter of choice and decision: How do we want to be as humans? It concerns ethical choices about the digital good life as much as it concerns the design of our technosphere. To stay with the example of online discourse, the digital then poses the question of what type of discourse we want, or perhaps more ontologically, what discourse should be. Some legislators and platform owners, for example, suggest using algorithms for improving online discourse. The idea is that artificial intelligence removes illegal content, and many seem to suggest that any content that is potentially harmful should be removed. While the former is usually defined in legal texts and practice, the latter is typically ill-defined and lies at the networks' discretion. The ensuing discussions of democratic parliaments and non-governmental think tanks then concern freedom of expression as a basic or human right, censorship, regulation, etc. (Cowls et al. 2020). While these are important and difficult discussions, a more essential line of thinking is required, namely, the question of what the essential qualities of online discourse should be. It is another typical characteristic of digital technologies that we can rarely do away with them once they have been rolled out. We therefore need to have productive, forward-looking discussions.
This can include a debate about how much "harm" a discourse may need to contain in order to be productive, stimulating, or provocative. We need to discuss not only the formal qualities of discourse but also what its purpose should be, who should partake, and whom it should serve.

Scaffolding Discourse
The challenges of digital technologies are not rooted exclusively in the fact that they are digital as opposed to analogue, nor in their ubiquitous nature and the ease with which digital technologies can manage large numbers. They are rooted in how these technologies affect our basic conceptions of the world. Although the technical characteristics are important, there is currently an unprecedented scale at which the digital facilitates commercial gains of a specific character. We mentioned how online discourse provides a basis for targeted advertising, for data harvesting, and for the construction of predictive behavioral models. This exploitation of online discourse lets discussions appear as a resource in the digital sphere. The digital (platform) perspective thus regards human language from the standpoint of observability and predictability. The resulting digital online sphere consists of (mostly) humans, who provide linguistic resources and their online presence, and of businesses, which require that humans be predicted and targeted in advertising. In this discourse, humans become a resource in the digital realm.
Such a resource-focused perspective is not unique to digital technology. As early as 1954, Heidegger suggested that this specific way of letting everything appear as a resource lies in the very nature of modern technology (Heidegger 1954). In his terminology, technology lets everything become part of an enframing ("Ge-stell") as resource ("Bestand"). Heidegger uses the example of the river that appears as a power source once we start building electricity stations. Digitization not only provides such enframing for various objects and phenomena in our environment; it additionally, and much more than previous, older technologies, enframes us as humans. It is perplexing that in the digital realm, the human is both the source and the sink. Humans constitute the economic end and at the same time provide the means to fulfill that end. Humans stand in reserve to the extent that they are simply either data or money generators. From an economic viewpoint, Zuboff (2019) identified a similar concept drift underlying surveillance capitalism. It is a strong and impactful understanding of humans driven by the commercial online world. It is commercially attractive and promising with its double take on the human as a resource.
Engineering may always imply "a certain conception of the human" (Doridot 2008), but we have choices. For example, not all public discourse needs to take such an instrumentalist turn. Like other human activities, we can choose the purpose of our speaking. Some forms of public online discussion are designed to facilitate dialogues among groups of highly engaged speakers of a local community (e.g., MIT's Local Voices Network, https://lvn.org). Others are solution- and goal-oriented and live a practice of focused contributions (e.g., the business- and employment-oriented network LinkedIn). Such examples suggest that there are ways to facilitate online discourse less prone to filter bubbles, echo chambers, fake news, etc., perhaps even in business settings. They also show how purposes can be designed in line with human needs; in fact, purposes are entirely human made.
We may still have to focus on technology, to occasionally retreat from social media, reform its way of working, and exert restraint, as Deibert (2020) suggests. However, realizing that some types of online discourse emerge from the instrumentalization of users, turning them into targets and exploiting them as resources, means to understand not just technology but ourselves. Technology then is the mirror that presents us with an image of ourselves that we may not find entirely attractive. We can also find possible relief in Heidegger, who quotes Hölderlin: "But where danger is, grows the saving power also." This suggests that the enframing also reveals truth upon which we can build if we want to overcome present danger. And lifted back to the level of digital humanism, the questions then become who we would like to be, including digitally, which purpose we pursue, and how we can live the digital good life. Finally, we will also have to find good answers to the question of who "we" is.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.