Introduction

At the beginning of the 2000s, international institutions such as the European Union invested heavily in artificial intelligence (AI; [1]). Academic studies of AI started appearing within the decade. Later on, researchers began studying the implications of AI for quality of life (QoL), focusing on individuals’ attitudes and motivations regarding the use of AI, as well as the behaviors and practices of AI use (or non-use) they engaged in. Studies such as those carried out by Wright [1] and Wright et al. [2] quickly drew attention to the potential benefits and drawbacks of AI use, discussing its potential for fostering economic growth, convenience, security, and individual and social safety, as well as growing concerns over reduced privacy, profiling, surveillance, spamming, and identity theft or fraud, among other issues. In relation to freedom, a key factor in QoL, the emergence of AI has coincided with the rapid development of the Internet and associated digital technologies, effectively erasing the physical borders between individuals and granting them new freedoms. During this time, various questions around the issue of AI have emerged, including ones related to the quantification of “thought” and the protection of the individual online.

Non-academic literature has often discussed AI and its implications for QoL, since technological developments over the past decades have created new needs for the average individual, while researchers have continued to develop new applications of AI to fill those needs. For example, Pew Research Center, a non-partisan fact tank, has highlighted several benefits and risks of AI use while stimulating open discussions about AI, the Internet, and the future of social relations and humanity, including the visions of the millennial generation and the disruption of established business models.

At present, there are still gaps in the research concerning the social impact of AI’s emergence, particularly regarding individuals’ attitudes and beliefs about AI and its actual use. There are vast differences between the way AI is popularly imagined and the way it is actually used, and these differences only widen when one considers future trends. This chapter aims to help bridge these gaps by presenting the results of a survey of 1000 university students from Western Switzerland regarding how they imagine the future of AI use and its implications for technologies and QoL, especially in relation to personal security and safety.

After a brief outline of the state of the art of AI, we present a tool called “Futurescaper” designed to animate collective reflection and foresight. We then present the methodology used to collect data in this study, along with the four scenarios for the future developed using the Futurescaper tool according to students’ responses, called, respectively, “AI for the best,” “AI down,” “AI for business,” and “AI freeze.” These scenarios may serve as starting points to enrich discussions on AI and QoL. It should be noted that in this study, we do not consider questions concerning wealth, employment, the environment, physical and mental health, education, recreation and leisure time, social belonging, or religious beliefs that factor into QoL. Instead, the study focuses on questions of safety and security in relation to QoL.

State of the Art of AI and Questions for the Future

Artificial intelligence (AI) refers to a quality of systems that rely on datasets typically originating from a range of connected objects (e.g., computers, tablets, smartwatches, and connected wristbands) and that are individually and collectively capable of quickly analyzing data by implementing search, pattern-recognition, learning, planning, and induction processes. Artificial intelligence has considerable power [3] and enables individuals to access information more simply and efficiently, as “the proactivity of the environment lightens the cognitive load that the user currently has to deal with to access information via computers” [4, p. 482].

The development of AI is paving the way for a large number of innovative, highly responsive, personalized applications and services. However, it is also a significant risk driver, particularly in the areas of personal security and safety. Artificial intelligence is a factor in QoL, due in large part to the emotional responses it provokes (e.g., safety/lack of safety, security/insecurity). Given the threats presented by hackers and other malicious actors, the protection of sensitive personal data in an environment of widespread AI use is essential and must be an object of constant attention for both service providers and users themselves. Recent scandals indicate the kinds of AI security and safety threats that may become increasingly prevalent in the future. For example, the cyber-attack launched in May 2017 via the “WannaCry” virus infected more than 200,000 computers belonging to individuals in over 150 countries [5]. Given these risks, the consequences of AI are a major concern for individuals and one that is likely to affect QoL.

In protecting against threats to private life and overall safety in the context of cybertechnologies, individual users must assume primary responsibility. As the domain of personal AI technologies continues to grow, individuals must continually increase their digital literacy and vigilance against risk. In 2017, in its biannual report on information security, the Swiss Confederation, drawing on an inventory carried out at the international level, projected that the number of connected objects would grow to 20 billion globally by 2030 (compared to the 6 billion connected objects that existed when the report was issued; [6]). The report also indicated that individuals’ vulnerability was “due to the inappropriate safety culture” that predominated in information security practices [6, p. 8], including the behaviors of service providers. The report demonstrates that cybersecurity ultimately depends on the actions of and interactions between several cultures of data protection. The aim should be to define appropriate individual and collective practices that would allow the development of a “good” safety culture [7]. Developing shared values, norms, and symbols with respect to security requires mobilizing not only experts but also citizens at large, who are the first vectors of risk and have a genuine role to play in developing cultures of safety.

To properly mobilize individual citizens, an analysis of their actual behaviors and practices with regard to AI is a necessary first step. Once these current behaviors and practices are understood, developing awareness and vigilance should be key areas of focus to allow individuals to ensure their safety and security and, as a result, achieve better QoL outcomes. An individual’s perception of security and safety is a central point that must be understood in order to transform behaviors and develop prevention practices. The use of foresight processes is a promising technique for facilitating such transformation, because it helps individuals and collectives consider systems of action [8] as a whole (i.e., political, economic, socio-cultural, technological, ecological, and legal [PESTEL] megatrends). Additionally, foresight processes consider weak signals (i.e., areas where information is incomplete or lacking) and, while encouraging exploration of alternative future scenarios, potentially improve participants’ decision-making processes and actions. Rather than anticipating foreseeable and unalterable changes and preparing for them (by being pre-active), foresight processes encourage individuals to act in order to bring about a desirable future (by being pro-active; [9]). This distinction between the (pre-)acceptance of changes (after the fact) and proactive contribution to change is important. When proactive actions are identified and perceived by a collective, individuals feel more motivated and responsible because they take the future upon themselves instead of simply dealing with the changes. In addition, in the case of AI, individuals are likely to be more involved in developing preventive actions to the extent that they feel safe and secure in their own use of AI.

Considering the need for research regarding AI use and attitudes towards AI among individuals, as well as the potential for collective foresight processes to serve as a tool to drive individuals’ future actions on AI, this chapter seeks to answer the following questions: What attitudes and motivations characterize individuals’ current AI use, and what behaviors and practices of AI use (or non-use) do they engage in? What future scenarios for AI are possible in Switzerland? How will companies, administrations, and the public sector in Switzerland be affected by developments in AI? How can individuals in Switzerland seize opportunities to improve security and safety and respond to AI-related threats? To answer these questions, we carried out a study between September 2016 and January 2017 involving 1000 university students in Western Switzerland. This chapter reports the results of the study and discusses the implications of these results for efforts to ensure individual users’ cyber-security and safety in the midst of technological developments and improve QoL in the long term.

Methods: Futurescaper, Animating Collective Reflection and Foresight

According to some researchers, foresight consists of developing plausible scenarios for the future that shed light on imminent decisions and actions [10]. The development of such scenarios facilitates individuals’ strategic actions in an uncertain environment [11]. Change is a central concern of foresight, and the scenarios developed through foresight explore new opportunities resulting from changes in a systemic environment [12].

In this study, we used an online platform called “Futurescaper” (Futurescaper.com) to encourage forward-looking reflections and operationalize foresight processes. Futurescaper is a collaborative tool that supports the imagining of future solutions by a group of participants, each of whom can provide significant contributions and insights into specific situations. Tools for foresight processes like Futurescaper help individual users describe and analyze their prospective actions in the context of a collectively generated future. Several similar tools also exist, such as Scenario Management International (ScMI.de), 4strat (4strat.com), FIBRES (fibresonline.com), and the Futures Platform (futuresplatform.com). Since we had the opportunity to use Futurescaper for in-class learning in a course on foresight with bachelor’s- and master’s-level students in Western Switzerland, this chapter focuses on this tool in particular.

Futurescaper’s contributions are qualitative (Fig. 18.1), as it helps groups of individuals collectively imagine the change factors influencing potential futures, the consequences of these change factors, and the consequences of these consequences. Futurescaper facilitates collective reflection by all the individuals within a group, allowing participants to propose new combinations of change factors and their corresponding future scenarios. In this study, the new scenarios included opportunities to be seized relating to the development of AI, as well as security and safety threats to be avoided. Given these opportunities and threats, participants can articulate the attitudes they should adopt in order to change or pursue the imagined future scenario. Participants verbalize courses of action, taking into account change factors that other individuals in the group might not have identified. In this way, Futurescaper is a future scenario-building tool that encourages proactive action.

Fig. 18.1 Contributions of Futurescaper and similar future scenario platforms
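Futurescaper’s qualitative model can be pictured as a tree: each change factor carries first-order consequences, which in turn carry consequences of their own. The following Python sketch is our own minimal illustration of that structure (not Futurescaper’s actual data model), combined with the PESTEL categories used in project preparation; all names and the example contribution are hypothetical.

```python
from dataclasses import dataclass, field

# PESTEL categories used to classify change factors during project preparation.
PESTEL = {
    "political", "economic", "socio-cultural",
    "technological", "ecological", "legal",
}


@dataclass
class Consequence:
    description: str
    # Second-order effects: the "consequences of consequences" that
    # participants are asked to elaborate.
    consequences: list["Consequence"] = field(default_factory=list)


@dataclass
class ChangeFactor:
    name: str
    pestel_category: str
    consequences: list[Consequence] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.pestel_category not in PESTEL:
            raise ValueError(f"unknown PESTEL category: {self.pestel_category}")


# A hypothetical participant contribution with first- and second-order consequences:
factor = ChangeFactor(
    name="Widespread adoption of connected wristbands",
    pestel_category="technological",
    consequences=[
        Consequence(
            "Continuous collection of personal health data",
            consequences=[Consequence("New profiling and surveillance risks")],
        )
    ],
)
```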

The course on foresight in which the present study was conducted took place between September 2016 and January 2017 in Western Switzerland. During this course, 1000 bachelor’s and master’s students participated in a crowdsourced foresight exercise. The students came from the fields of economics and management, engineering, art, and design. A facilitator oversaw the overall process of the exercise, including configuring the platform and encouraging the participants to brainstorm change factors for the scenarios.

Implementing Futurescaper involves the following four-step process (as shown in Table 18.1): project preparation (step 1), stakeholder engagement (step 2), interpretation and analysis (step 3), and the reporting and presentation of the results (step 4). As considerations concerning the general environment (e.g., PESTEL factors) are the starting point of any definition of plausible future scenarios, such considerations are a key factor in the project preparation step [10]. Table 18.1 presents a brief description of the actions carried out for each step of the process in the present study, as well as the results of each step.

Table 18.1 Steps of Futurescaper approach to future scenario building

To prepare the project (step 1), the facilitator predefined 52 change factors derived through exploratory research and the collection of data concerning AI. The facilitator used newspaper articles, scientific and professional studies, blog posts, and other documents to provide quality starting content. The facilitator then held workshops with students (step 2), each of which began with students viewing an introductory video on the current state of AI in Switzerland. In total, 25 workshops (with an average of 40 students per workshop) were conducted, all of them structured the same way: an introduction to AI, a presentation of the survey (see Table 18.2 for survey questions), the students taking the survey, and a wrap-up. To stimulate students’ initial thoughts and provoke discussion, the facilitator presented three recent examples of AI in practice (autonomous vehicles, smartwatches, and drones) at each workshop and compiled students’ reactions. At each workshop, the facilitator also led a discussion among the students to elicit insights regarding their knowledge of AI, their attitudes and motivations regarding it, and the behaviors and practices of AI use (or non-use) they engaged in. The process of engaging the students in step 2 was highly beneficial, leading to the identification of 2193 crowdsourced change factors.

Table 18.2 Survey questions

Step 3 of implementing Futurescaper involves interpreting the data collected. In this case, the 2193 crowdsourced change factors were reduced to 337 merged change factors, which we then analyzed in order to identify interrelationships and possible axes along which to consider distinct scenarios. The identification and analysis of the merged crowdsourced change factors during step 3 supported the generation of four contrasting, plausible future scenarios for AI with respect to personal security and safety. We defined two primary axes (see Fig. 18.2) to represent these four future scenarios. We then developed 564 proposals directed towards businesses, governments, and individuals regarding actions that could be taken to seize opportunities and protect against threats related to the use of AI. By outlining this study in the present chapter, we carry out step 4, the final step of the process, which is to present the results to the international academic community. We hope to also present these results in a peer-reviewed article in the near future.
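The chapter does not specify the procedure by which the 2193 raw contributions were merged into 337 factors; that merging was analytical work. Purely as an illustration, a first automated pass might collapse near-duplicate wordings before manual review. A minimal Python sketch, with hypothetical inputs:

```python
import re
from collections import defaultdict


def normalize(factor: str) -> str:
    """Reduce a free-text change factor to a crude comparison key:
    lowercase, no punctuation, collapsed whitespace."""
    text = re.sub(r"[^\w\s]", "", factor.lower())
    return re.sub(r"\s+", " ", text).strip()


def merge_factors(raw_factors: list[str]) -> dict[str, list[str]]:
    """Group near-identical crowdsourced factors under a shared key.
    (In the study, merging was done analytically; this automated
    grouping is only a first-pass illustration.)"""
    merged: dict[str, list[str]] = defaultdict(list)
    for factor in raw_factors:
        merged[normalize(factor)].append(factor)
    return dict(merged)


# Hypothetical contributions from different workshop participants:
raw = [
    "Loss of privacy!",
    "loss of privacy",
    "Autonomous vehicles reduce accidents",
]
groups = merge_factors(raw)
print(f"{len(raw)} raw factors -> {len(groups)} merged factors")
# Output: 3 raw factors -> 2 merged factors
```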

Fig. 18.2 Possible scenarios for the futures of AI in Switzerland

Results

In this section, we present the results of the study described above regarding possible futures for AI in Switzerland. In the presentation of these future scenarios, we consider the implications of future technologies for the security and safety of users and the possibility of such technologies leading to improvements to QoL in the long term. As mentioned in the previous section, our crowdsourced foresight approach led to the identification of 337 change factors. These allowed for the elaboration of the four scenarios discussed below (see Fig. 18.2): “AI for the best,” “AI down,” “AI for business,” and “AI freeze.” We named each scenario according to the descriptions provided by the participants of the study. Each of these four scenarios presents opportunities related to AI that businesses and governments might seize in the present to improve long-term QoL, as well as security and safety threats that individuals can proactively address.

As mentioned above, the scenarios were defined along two main axes, based on the two composite change factors whose future outcomes were deemed the least certain due to their high degree of dependence on social and technological developments: individual datafication and interest in connected objects. Datafication refers to the quantification of various aspects of our daily lives in the form of data and information that can be used for diverse purposes [14]; it consists in moving from data to useful information: “insurance companies can, for example, use data relating to the movements of the vehicles of their policyholders in order to establish contracts as close as possible to the risks actually presented by their customers (and no longer contracts taking into account their age, their sex, and their driving history)” [13, p. 95]. A high level of individual datafication means that large amounts of information are extracted from individuals’ data via everyday devices for potential use by companies, governments, or the individuals themselves. In a society with low levels of individual datafication, the amount of data extracted from individuals, or the amount of useful information derived regarding individuals’ behaviors, will be lower than it is today, either as a result of individuals’ rejection of data-collecting devices or because of an improved culture of security around digital devices.

High and low interest in connected objects/AI refer to the degrees to which individuals, businesses, and governments take interest in AI technologies and assess the benefits and drawbacks of their various applications; we assessed this interest based on our analysis of individuals’ attitudes and motivations regarding AI and the behaviors and practices of use they engage in. In a society with high AI interest, individuals are knowledgeable about developments in AI and their implications; in a society with low AI interest, developments in AI may be widely used, but there will be little awareness of or interest in AI in the culture at large. In what follows, we offer a brief description of the scenarios developed based on these and associated change factors.
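Because the four scenarios are simply the quadrants of these two axes, their logic can be encoded directly. The following Python sketch is our illustration of Fig. 18.2 (not a component of the Futurescaper platform); it maps each combination of datafication and interest levels to its scenario name:

```python
from enum import Enum


class Level(Enum):
    LOW = "low"
    HIGH = "high"


# Quadrants of Fig. 18.2, keyed by (individual datafication,
# interest in connected objects/AI).
SCENARIOS: dict[tuple[Level, Level], str] = {
    (Level.HIGH, Level.HIGH): "AI for the best",  # utopian scenario
    (Level.LOW, Level.HIGH): "AI down",
    (Level.HIGH, Level.LOW): "AI for business",
    (Level.LOW, Level.LOW): "AI freeze",
}


def scenario(datafication: Level, interest: Level) -> str:
    """Return the scenario name for a (datafication, interest) pair."""
    return SCENARIOS[(datafication, interest)]


assert scenario(Level.HIGH, Level.HIGH) == "AI for the best"
assert scenario(Level.LOW, Level.LOW) == "AI freeze"
```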

“AI for the Best” (A Utopian Scenario)

This scenario is characterized by strong interest in AI combined with a high level of datafication of individuals. In this scenario, individuals view new technologies with enthusiasm. They are expert users of connected objects and are eager to exploit their full potential, treating them as instruments to facilitate community-building and individual growth. Individuals in this scenario are proactive and willing to share—in real time, if possible—the data they generate with the manufacturers of connected objects and service providers. They feel safe and secure and are convinced that this is the best way to contribute to the continuous improvement of user experiences. They are confident and feel free, and they view the impact of AI on their QoL as positive.

In the “AI for the best” scenario, AI becomes a natural extension of the human body. At the same time, individuals develop a genuine connection with AI (made possible by the widespread deployment and generalization of connected objects). Artificial intelligence reassures users; it follows and supports individuals’ needs in real time. In this scenario, one assumes that there is no possibility of AI harming the individual. Secure in their use of AI, individuals are able to use it to maximize their QoL in the long term.

In this scenario, the design of AI technologies is informed by the high level of overall datafication of individuals, allowing various highly personalized services to be offered to the users. In the “AI for the best” scenario, safety and security experts face surprising questions: What is left for them to do? What does society expect from them?

“AI Down”

The “AI down” scenario is characterized by strong interest in AI combined with a low level of individual datafication, even though vast data-processing capabilities remain available to individuals. In this scenario, while connected objects continue to accumulate, they offer neither many new opportunities for users nor new security and safety risks. The reason for this is that entities and individuals are unable to transform the data collected into actionable information or relevant knowledge in their particular context. Individuals’ QoL is only partially impacted, since interest in AI does not result in significant improvements in digital technology or widespread cultural developments, leading to frustration and a sense of stagnation.

In the “AI down” scenario, the design of AI technologies is affected by weak datafication, and there is a lack of highly personalized services available to users. In this scenario, safety and security experts are mainly concerned with the future of AI and its potential uses, asking what the potential of AI is and when it will be fully realized. AI is unlikely to affect QoL, since technological developments seem incapable of producing changes in culture or daily life.

Consequently, these security experts must react to new developments in the same manner as users, attempting to predict as early as possible the potential security and safety threats and opportunities produced by new services, objects, and classes of connected objects.

“AI for Business”

The “AI for business” scenario is characterized by low interest in AI combined with high levels of datafication of individuals. In this scenario, individuals have only slight interest in AI, although they individually and collectively generate ever-increasing amounts of data through their numerous daily transactions. The datasets they produce provide information and high value-added knowledge to businesses and government entities, while the safety and security of users are also well ensured.

In the “AI for business” scenario, the data that are produced form a “knowledge pool” (i.e., a commons) that is made available to innovators to help them accelerate the development of prototypes and shorten the time required for them to reach the market. Individuals contribute to open innovation dynamics [15], which most actors in the AI field, including computer security experts, have widely embraced. Individuals’ QoL is significantly impacted because AI users, whether they approve or not, contribute knowledge that leads to innovations in everyday life technologies.

In this scenario, the design of AI technologies benefits from high levels of datafication, but there is a lack of highly personalized services offered to users, as the users do not express a need or desire for these services.

In this version of the future, the large amount of user data available to safety and IT security experts allows continuous improvements to data and infrastructure security and greater awareness among experts of the challenges associated with cybertechnology. This ensures that businesses do not simply take advantage of individuals’ data for their own benefit. Experts are concerned about establishing and maintaining high standards of cybersecurity to ensure the safety and security of individuals and maximize their QoL in the long term.

“AI Freeze”

The fourth and final scenario, “AI freeze,” is characterized by low interest in AI combined with low levels of datafication of individuals. In this scenario, AI consists of a variety of gadgets. However, most people are afraid that, sooner or later, a disaster will occur that compromises their safety and security. There is a lack of confidence in the manufacturers of connected objects and service providers, which ultimately impacts users’ QoL. Users ask themselves questions such as “When is it going to happen?” and “Will that connected, autonomous vehicle be hacked and cause the death of one or more bystanders?”

At the same time, people distrust the large companies that collect, analyze, and perhaps even resell their data. From the perspective of safety and security experts, the “AI freeze” scenario is not a “surveillance” scenario. However, while no one questions the importance and necessity of continuously monitoring data traffic, no one assumes responsibility for detecting attacks and other fraudulent behavior either. This omission is potentially catastrophic, putting individuals’ safety, security, and long-term QoL at risk.

In this scenario, the design of AI technologies is limited by low levels of datafication, and highly personalized services for users are lacking.

Discussion

There are numerous consequences of the development of AI for individuals, public agencies, companies, and communities. Studies that analyze the implications of AI, including those focusing on AI’s impact on QoL, must therefore consider the issue at both the individual and collective levels. Trends at one level of analysis can have chain effects with consequences for others, such as changes in collective employment that impact individuals’ health. However, in considering these facets, research on AI’s social impact should ultimately assess whether AI helps to meet humanity’s needs or, on the contrary, creates additional problems. To predict AI’s future influence on a society, researchers must specifically consider the perceptions of the individuals in that society regarding AI, since cultures and value systems around the world vary widely and are subject to continuous evolution and change. Expectations regarding standards of security and safety, for instance, may differ from one population to another. Furthermore, it is important to consider the perceptions of individuals in their particular context, because context ultimately determines individual attitudes and drives behaviors. These perceptions, which influence social action, can also vary from one individual to another. To predict the potential impact of AI on QoL in the future, individuals’ perceptions of AI must therefore be a key consideration.

Our study, which surveyed 1000 university students in Western Switzerland regarding future AI scenarios in the context of workshops on this topic, increased these students’ level of familiarity with AI, allowing them to gain experience thinking about this topic while addressing additional issues, such as personal security and safety, that they may not have considered elsewhere. The study also contributed to individual datafication, since Futurescaper stores the responses used to develop its plausible future scenarios on a large server. Our study corroborates the idea, associated with the quantified-self movement, that practices of “measuring yourself” using technological innovations can be highly beneficial to the individual. The quantified-self movement, first described in 2007, encourages self-knowledge through the collection and analysis of data relating to the body and its activities. Recent studies [16] have argued that “quantified selfers” are the individuals with the highest levels of datafication and interest in connected objects. Unfortunately, it was beyond the scope of this study to consider the degree to which participants identify with this movement.

In other research settings, researchers have also defined and evaluated wearable technologies [17]. It would be worthwhile, in future research, to imagine, or to ask participants to imagine, future individuals’ attitudes and motivations regarding wearables that incorporate AI, as well as their behaviors and practices of use (or non-use). Based on the findings of studies like that of Estrada-Galiñanes and Wac [17], which analyzed 438 off-the-shelf wearables, new questions along the following lines could be introduced to the survey given to students (see Table 18.2): What impact could the use of small personal devices have on individuals? Could these devices be considered a new organ of the body? Since these devices can be worn on one’s head, wrist, legs, torso, arms, and ears, do the quantified selfers who wear them perceive themselves to be advanced people, cyborgs, or humanoids?

Our study has certain methodological limitations. One of these is the fact that the merged crowdsourced change factors used to develop the future scenarios described above were derived from only one particular population or category of stakeholders [18], namely students from Western Switzerland. Although these students may correctly identify future trends related to personal security and safety in connection with AI, they are not representative of the entire population of Switzerland. Furthermore, the facilitator in this study played a determining role in how students engaged in the foresight process. The participants also may have been more motivated to participate because they knew each other and belonged to the same school. One can imagine participants having very different attitudes regarding the future of AI in other study contexts (e.g., alone at home or in a country at war, such as Syria). The results could also be affected by participants’ level of technical mastery or their motivation to use an electronic tool like Futurescaper, as well as by settings in which participants are not allowed or able to use such tools.

Despite these limitations, our study contributes to research on QoL technologies [19] and their future implications. As greater social interaction in environments such as the university classroom improves students’ learning and affects future behavior, our study demonstrates how QoL technologies like Futurescaper can enhance learning: they allow experimentation with new forms of social interaction and intelligence that individuals ordinarily would not directly or spontaneously engage in, and they promote collective rather than solely individual learning. By inspiring proactive attitudes towards the future, Futurescaper and similar QoL technologies encourage participants to renew their commitments to particular courses of action or, on the contrary, revise them. In the future, QoL technologies may become a common good that any individual (or other form of intelligence) in the world has the right to freely access and use.

Concluding Remarks

In this chapter, we have examined perceptions of personal safety and security in the context of today’s cybertechnology, specifically the development of AI solutions and services. We have also discussed the implications of AI in daily life for future QoL. In particular, based on the survey responses of 1000 bachelor’s- and master’s-level students in Western Switzerland, we have generated four scenarios for the future of AI using Futurescaper, a platform that facilitates foresight processes. Ultimately, these scenarios may serve as a test bed for experts’ future research. We have also indicated several strategic options and policies that can improve safety and security in the future. Based on the lessons these scenarios present, readers of this chapter may adjust their decisions regarding the use of AI or even consider alternatives that they had not thought of previously. By representing the potential implications of AI in different plausible scenarios, this study also demonstrates the need for future research focusing on the ethics of AI use, citizens’ vulnerability to AI-related threats, and the relationship between AI and QoL.

Building plausible forward-looking scenarios and assessing them in relation to various strategies or policy options gives governments, businesses, and individuals the opportunity, at a relatively low cost, to anticipate the direct and indirect long-term impacts of particular decisions on personal security, safety, and QoL. Foresight processes like those promoted by Futurescaper can benefit organizations, communities, and societies around the world. In the domain of public governance from the local to the national level—where decision-making is often plagued by instability, uncertainty, and contradiction—the possibility arises of institutionalizing foresight practices, which are remarkably inexpensive and accessible in comparison to the strategic gains they provide.