Introduction

Intelligent Environments (IE) are physical spaces equipped with sensors, actuators, and other smart technologies that can detect and respond to the behavior of their occupants, making the environment more efficient, comfortable, and safe. In this context, designing genuinely human-centric Intelligent Environments requires prioritizing the enhancement of people’s lives and well-being over any potential compromise [44]. To achieve this goal, it is essential to protect human agency, control, and empowerment in a meaningful way [4]. By doing so, individuals can think and act freely and make creative choices that preserve freedom and democracy, both at an individual and a collective level [46, 47].

Currently, a growing body of research is underway at the intersection of Human Computer Interaction (HCI), the Internet of Things (IoT), and Artificial Intelligence (AI) [4, 37, 46], in close interaction with research communities and societal actors that have human behavior, human-technology experiences, and human rights high on their agenda [23]. By prioritizing human experience and rights, the development of human-centric Intelligent Environments can lead to a more ethical, inclusive, and sustainable future for IE technologies.

When looking at existing Human-Centered frameworks in Intelligent Environments, it can be observed that they provide effective user monitoring and incentives for participation [11]. Further, current technical specifications do offer specific tools for citizen and societal participation, but only as conceptual technical architectures, applications, and deployments [56]. While these types of frameworks have their merits, they are limited in how they prioritize human and societal interests [33], particularly in safeguarding user empowerment, inclusivity and inclusion, human control, and meaningful human involvement [56].

In light of new EU regulatory requirements, this paper outlines how the shift towards future Human-Centered Intelligent Environments [37, 46], and the current and future technical visions, can contribute to genuinely humane, fair, and equal real-world outcomes [13, 14, 15]. Currently, the envisioned human-empowering design outcomes are detached from the technical implementations and the realized real-world outcomes in Intelligent Environments [56]. The technical contributions largely focus on human well-being; human privacy, security, and control; and end-user involvement [56]. Even so, the underlying interpretations of human-centredness remain fragmented, lacking shared guidelines and design frameworks [18].

To develop human-centric Intelligent Environments that can be genuinely humane, key features and mechanisms that are trustworthy, sustainable, safe, and inclusive need to be envisioned through a coherent, multi-disciplinary lens [57]. Recent studies have involved multi-disciplinary experts in foreseeing how ethics, human control, and agency can be preserved in future digital systems as AI spreads [3, 42].

The prime objective of this research is to build consensus and a robust, shared understanding of the technical design and implementation of future Human-Centered IEs. In this paper, we therefore apply a systematic, participatory Foresight methodology in which multi-disciplinary future visions are mapped against the goal of achieving a truly human-centric Intelligent Environment. Foresight science is a protective tool to safeguard against human and societal harms and to guide future technical developments toward genuinely human-empowering outcomes [31]. An important aspect is understanding likely developments, the desired or undesired outcomes, and what may lead to future imagined visions that are feasible, desirable, and possible. To understand relevant developments and explore different future visions, we have engaged various stakeholder groups from policy, academia, end-users, and industry with a leading role in, or awareness of, creating the future vision for Intelligent Environments. We apply Horizon Scanning in combination with eight qualitative, in-depth, semi-structured interviews with experts. We also bring in a genuine Human-Centered perspective among the stakeholder groups by involving six lead users in a focus group.

The following research questions guide our work:

  • RQ1 What are the current trends, developments, and underlying drivers that have the potential to significantly impact future scenarios?

  • RQ2 What are important barriers and issues that will drive or hinder the realization of humane or humanity-centered frameworks and methods?

  • RQ3 Which visions and future developments are feasible, desirable, and possible?

  • RQ4 How can developments be steered in the desired direction? What can be the united vision?

The structure of the paper is as follows: In Sect. “Background”, we provide an overview of the developments related to humane and humanity-centered approaches in Intelligent Environments, along with an examination of existing research on future multi-disciplinary visions and frameworks. Section “Methodology” outlines the ForSTI research methodology employed in this study and describes the mixed-method approach used. The findings are presented in Sect. “Results”. Initially, we comprehensively analyze the broader environment by considering various social, economic, technological, environmental, political, and value factors. Subsequently, we identify the key factors that contribute to a humanity-centric vision, leading to the proposal of future frameworks, methods, and solutions. Furthermore, we juxtapose positive and negative imaginations of the future to represent alternative future states. Section “Discussion” offers a concise discussion of the core findings to elucidate the gap between the feasible, possible, and desirable future visions. Subsequently, we present the limitations of this study in Sect. “Limitations and Future Research”. Finally, in Sect. “Conclusion”, we conclude the paper by highlighting the implications for future research.

Background

The backdrop

The technical field of future intelligent environments has evolved rapidly, along with the role of humans. Very recently, the regulatory requirements in the digital environment, manifesting themselves globally and led primarily by the European Union, have been catching up [13, 14, 15]. A Human-Centered policy push has brought forward well-intended design visions and aspirations in which humans’ autonomy, empowerment, trustworthiness, and ability to participate as free agents are prioritized [16]. Even so, to have a technical design foundation based on democratic values that can achieve fair and trustworthy realized outcomes, the power gap between those deploying the technology and those subjected to it must close instead of widening [6]. As the concept of a network evolves into next-generation intelligent environments, the way it is built also changes, along with an expansion of the Human-Centered design sphere [22]. In particular, this evolution has implications for intelligent network technology’s societal and human counterpart: the overall impact of interactive and intelligent systems on everyday life is becoming more and more pervasive and distinct [35, 46]. Accordingly, a change in “zeitgeist” is developing from different perspectives. The key objectives in this respect are to ensure that humans’ mind-space gets the highest form of protection, to promote long-term human safety and well-being, to ensure meaningful control and empowerment, and to safeguard people’s ability to engage in democratic, free societies [37, 43, 46]. Broader emerging trends point towards the importance of more direct participation and genuine empowerment of humans and society when interacting with the Intelligent Environment [57]. Thus, human-centredness is becoming more central to future technical contributions in the field of IE, yet its interpretation is not clearly linked with a shared Human-Centered design paradigm [56].

Humane and humanity-centered design principles in IE: status and challenges

Several Human-Centered concepts, such as User Experience and Quality of Experience, have been pushed forward in the last decades as vehicles to ensure that human and user perspectives are actively considered [54]. However, even though both consider pragmatic and hedonic aspects, expectations, and outcomes of the interaction with a system or service, their scope remains limited. They do not easily scale up to a more “Humanity-Centered” perspective and its associated higher-level values. Building Intelligent Environments that genuinely benefit human users’ lives requires, however, that community and societal well-being [47] are also considered. Humane or humanity-centered intelligent environments are those in which the system design, service, and IE operations systematically prioritize the needs of human users, communities, and society [23, 43]. That means that the technical design has prioritized the needs of humans and society first, and early enough, so that the system adapts to those needs, while also keeping humans and communities safe and protected, with the ability to act as self-directed agents [46]. However, designing for such a humanity-centric vision poses several challenges related to the technical frameworks and methods of Intelligent Environments [39, 56]. Increased multi-disciplinary research, as put forward in this paper, can help shape a future human-centric roadmap for Intelligent Environments and re-align the field towards a coherent understanding that benefits humans and humanity first [57].

Theoretical design paradigms provide the foundation for the technical design and evaluation of human-computer interaction in Intelligent Environments [36]. The technical contributions and the underlying Human-Centered theoretical perspectives in Intelligent Environments remain somewhat fragmented and reflect varying understandings [56]. However, a stronger link between Human-Centered theory and technical configurations in Intelligent Environments can produce trustworthy, safe, and human-empowering technical solutions [23].

In order to design for outcomes that protect human users and society, it is not sufficient to ensure ease of use; meaningful human control, involvement, and empowerment must also be considered [7]. Factors such as privacy, security, and well-being, or simply experiences that make things easy for humans or society, are typically addressed in Human-Centered Intelligent Environments [56]. In the context of recent innovations in digital technology, the target is meaningful control, autonomy, and involvement [7] as a key component of human and societal well-being, which includes identifying aspects that can promote or undermine human user autonomy [39].

These human-empowering design outcomes are increasingly targeted to meet the needs of communities served by IoT/IE technology. Targeting the design approach in this way helps determine what technical implementation works best for humans and society first and foremost [43]. The current interpretation of the Human-Centered (or people/citizen-centered) design approach, when targeting non-functional human-empowering IoT design outcomes such as meaningful control or empowerment [56], needs to be balanced with the radical notion that adding technology will not always make the experience better, and that the answer to making something better will not always be technology [7].

Furthermore, there are challenges related to establishing a clear link between more Human-Centered outcomes and the technology implementation, given the lack of shared guidelines and design frameworks geared towards technical domains [18]. Consequently, the technical contributions do not reflect the degree of human agency lost [26], who is influenced or impacted, the best- and worst-case design scenarios, or the associated positive and negative real-life outcomes [26, 45].

Future research towards human-centric visions, methods and frameworks in IE

The existing scientific body of research on future visions, methods, and technical frameworks related to human-centric intelligent environments primarily focuses on the technical system’s ability to take care of human users’ needs and preferences by detecting their presence and characteristics [35], on enhanced human-centric sensing functionalities [11], and on more transparent, accessible, and easy-to-use interfaces [41, 51], in order to support humans when interacting with the environment.

There is, however, a growing trend among diverse stakeholders to prioritize sustainable and humanity-centered design criteria to address the technical IE model’s limitations with respect to human and societal well-being [4, 34, 55]. Such research developments focus on establishing sustainable, ethical [42], and value-based [46] technical frameworks and methods. The identified future challenges include the consideration of social impact [29, 48], human impact [17], ethical considerations [5], and privacy laws and regulations [24].

Various technical frameworks have been envisioned to increase meaningful human involvement and control in Intelligent Environments. These frameworks include, for instance, Human/Society-in-the-loop [1], Privacy Laws, and Participatory models [47, 56]. Examples of technical frameworks in the artificial intelligence/machine learning fields include Human-Centered AI [43], Explainable AI [27], and Digital/AI Ethics [12]. For example, participation and collaboration with human users includes being asked for data or information, or providing feedback or input on the technical aspects of the Intelligent Environment ecosystem [32, 56]. The human-centric IE technical system allows humans to participate by giving feedback, confirmation, or support to protect their privacy, security, or well-being [47]. Additionally, some technical frameworks emphasize the insights and actions that can meet personalized needs [21], promote efficient technical operations modeled on human and social emulation [11], or provide incentives for human participation to enhance technical capabilities [58].

Despite efforts to develop human-centric technical IE models, there is still considerable room for improvement. It has been argued that these models often operate without “meaningful” influence from human users or societal actors and fail to adequately protect and safeguard human interests and agency [4, 26], despite having Human-Centered interfaces and considering human-centricity at the surface level.

Multi-stakeholder visions and frameworks

The key to achieving genuine human empowerment and representation is to bring together multi-disciplinary teams [20] and include a range of perspectives in the technical road-mapping exercise. Participatory Foresight methodology (ForSTI) allows for broadening the scope beyond the existing emphasis and for addressing skews and biases that favor particular communities or negatively affect vulnerable groups [31]. Furthermore, the wide range of multi-disciplinary perspectives involved in participatory ForSTI allows for exploring the frameworks and methods needed to achieve a clear-eyed technical model of user empowerment, inclusivity, human control, and involvement in Intelligent Environments.

Prior studies in this area have placed less emphasis on investigating coherent, multi-stakeholder visions, technical frameworks, and the corresponding methods for a more humane or humanity-centered Intelligent Environment [20]. More specifically, earlier research on this topic has identified specific design challenges for intelligent environments concerned with achieving a coherent, multi-stakeholder vision that can be operationalized technically [56, 57]. Those design challenges are:

  • How to achieve a shared Human-Centered theoretical vision that can be translated into technical operations?

  • How to incorporate multi-disciplinary skills and knowledge into the traditional technical configurations?

  • How to establish a multi-dimensional design framework that balances networking automation with human performance measurements in the IoT networking case?

The future of human agency has been investigated in the context of digital systems and AI, in a study to which multi-disciplinary experts from technology innovation, development, business, policy, academia, and activism responded. The report concludes that experts dispute the level of control the general public will retain over essential decision-making as AI spreads into digital systems [3]. Comparatively, a multi-disciplinary understanding of ethical autonomous/AI systems has emerged in the context of pervasive and autonomous systems [42, 43]. For instance, the aligned and shared technical framework presented by the IEEE (Institute of Electrical and Electronics Engineers) for autonomous systems translates into a holistic approach to the technical design, service, and operations of autonomous systems. The requirements embody: (1) the highest ideals of human rights, (2) approaches to prioritize the maximum benefits to humanity and the natural environment, and (3) strategies to mitigate risks and negative impacts as autonomous systems evolve as socio-technical systems [42]. Other multi-disciplinary ethical design frameworks include the EU policy initiative, the Ethics Guidelines for Trustworthy Artificial Intelligence [16]. These guidelines set out a framework for achieving Trustworthy AI. Trustworthy AI has three components, which should be met throughout the AI system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, from both a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm [16]. According to the guidelines, the corresponding assessment/requirement list emphasizes:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Transparency

  • Diversity, non-discrimination, and fairness

  • Societal and environmental well-being

  • Accountability

To summarize, taking this general background and related work into account and building upon the findings from [57], we argue in this paper that an underlying reason for the existing discrepancies between theory and practice is that there is no rooted, common understanding and, consequently, no joint, broadly-supported vision that can guide the design of future Human-Centered intelligent environments. Utilizing a Foresight methodology, building upon a participatory approach, and considering different perspectives can help to fill this gap and to advance the next-generation Intelligent Environment technology roadmaps in a more humanized direction, carefully considering undesired outcomes and consequences and ways to avoid them [57]. In Sect. “Methodology”, the adopted methodological approach is explained in detail.

Methodology

Overall research design

In order to address the research questions introduced in Sect. “Introduction”, we used a mixed-method, foresight-based strategy of inquiry. The mixed-method research paradigm, grounded in the pragmatic worldview [9], is problem-centered and values different forms of data. It allows researchers the freedom to combine quantitative, pre-determined methods with more emergent, qualitative approaches that emphasize interpretation and situatedness. Mixed-method research can also provide rich possibilities for triangulating different data sources, depending on the actual research design. The latter was an important element in this research.

Secondly, the research design is based on foresight principles in order to foster a forward, future-oriented focus. As outlined in Sect. “Background” and emphasized in prior work, having a shared future vision and a commonly supported desirable direction is considered of paramount importance to steer toward genuinely Human-Centered Intelligent Environments [56, 57]. For this reason, we adopted a ForSTI (i.e., Foresight in Science, Technology, and Innovation) approach [31]. The latter refers to an approach aimed at understanding and exploring different future scenarios to gain better insights into what different stakeholders and perspectives consider to be possible, feasible, and desirable developments, which trends and factors might influence these developments, and what would be needed to get there. Foresight has been defined as: “The applications of systematic, participatory, future-intelligence-gathering and medium-to-long-term vision building process to informing present-day decisions and mobilizing joint actions” [30]. Future vision building involves the creation of long-term, overarching scenario narratives. In this paper, we use the term future visions when referring to such long-term scenarios [31]. Further, Foresight represents a pluralistic process that usually considers multiple futures, and it is participatory by nature and action-oriented [31].

In this research, an important consideration was the incorporation of a multi-stakeholder perspective in the exploration of possible, feasible, and desirable future developments. Such an approach is considered particularly important in the context of Human-Centered IoT and intelligent environments, as the goals, interests, and anticipations of different stakeholders (e.g., end-users, policymakers, technology developers) may strongly differ [56, 57]. In addition, the Foresight activity must be situated in a broader external context, constituted and influenced by the Social, Technological, Economic, Environmental, Political, and Value (STEEPV) systems [40]. According to Saritas [40], the goal of a Foresight process is often to make changes or improvements to one or several such systems. However, as they are strongly interrelated and bear strong interdependencies, they need to be considered together. For this reason, the STEEPV framework is also used to categorize the findings in Sect. “Emerging trends for future humanity-centered intelligent environments”. According to [28], key questions to address in this context are “what is possible?” (situated more in the realm of the fields of science and ecology), “what is feasible?” (situated in the realm of technology and economics), and “what is desirable?” (tapping into aspects of governance, regulation, policy, values, and social acceptability) [40].
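To make this categorization concrete, the following minimal sketch (in Python) shows how identified factors can be tagged with a STEEPV category and with the stakeholder sources that raised them; all names and example factors here are illustrative assumptions, not part of the study’s actual tooling:

    from dataclasses import dataclass, field
    from enum import Enum

    class Steepv(Enum):
        """The six interrelated STEEPV systems [40]."""
        SOCIAL = "Social"
        TECHNOLOGICAL = "Technological"
        ECONOMIC = "Economic"
        ENVIRONMENTAL = "Environmental"
        POLITICAL = "Political"
        VALUE = "Value"

    @dataclass
    class Factor:
        """A development, trend, or driver identified during scanning."""
        name: str
        category: Steepv
        sources: set[str] = field(default_factory=set)  # e.g., {"expert", "lead_user"}

    # Illustrative entries, loosely echoing factors discussed in Sect. "Results".
    factors = [
        Factor("Responsible data economy", Steepv.ECONOMIC, {"academia"}),
        Factor("Climate anxiety", Steepv.ENVIRONMENTAL, {"lead_user", "policy"}),
    ]
    by_category = {c: [f.name for f in factors if f.category is c] for c in Steepv}

Grouping factors by category in this way mirrors how Table 3 organizes the findings, while the recorded sources support the triangulation described in Sect. “Results”.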

More concretely, in order to thoroughly map the current developments, trends, and drivers that may impact future visions and scenarios, and to elicit different perspectives on possible, feasible, and desirable future visions, as well as on potential barriers and undesired outcomes in this respect, we adopted a transformative, mixed-method approach. As part of this overall research design and enquiry strategy, several methods were used. First, we used the “Horizon Scanning” method, conducted by extracting data from a Foresight platform database called The Futures Platform [49]. Secondly, we used the qualitative interview method: semi-structured interviews were conducted with experts. Lastly, we adopted the focus group method: a focus group was conducted with end-users, which led to revisiting the horizon scan (hence, we refer to the approach as “transformative”). The combined insights serve as the most important output of the “intelligence phase” in a broader foresight approach. In the following, we introduce the different parts of the research and the corresponding research methods in more depth.

Horizon scanning

Rationale

Horizon scanning activities have increasingly been used in the last decades as a tool for evidence-based policymaking, e.g., in terms of research and innovation. In particular, horizon scanning has been used to understand the broader societal, economic, political, and environmental implications of new technologies [2]. It is considered a typical foresight method and has been defined as “…the systematic examination of potential (future) problems, threats, opportunities and likely future developments, including those at the margins of current thinking and planning” [52], which may encompass both unanticipated, unexpected events and issues, as well as existing problems that have manifested themselves for many years already, trends, and so-called “weak signals” of change [52]. Beyond the emphasis on the systematic nature of horizon scanning activities, other authors [25] have emphasized aspects such as “a creative process of collective sensemaking” and the “formulation of pertinent future developments” towards actionable insights.

In this project, we used horizon scanning as a systematic and participatory activity to identify ongoing developments and trends, as well as signals of change (and the direction and strength they may indicate), in the broad area of intelligent environments and the technologies enabling such environments. The primary rationale behind this approach is that these developments and trends may strongly influence the envisioned future visions and associated actions and, therefore, need to be mapped and better understood so that they can be taken into account as part of the visioning process. An open, exploratory approach was combined with intelligent environment-centered scanning, following the recommendations from [2]. Such a combination allows identifying trends in line with how they are expected to develop [2]. The exploratory approach is centered around the identification of so-called Weak Signals and Wild Cards [38]. The issue-centered scanning approach, on the other hand, builds upon previously identified primary signals that are Strengthening or Weakening. This combined approach allows for studying trends (i.e., “gradual forces, factors and patterns that are pervasively causing a change in society”) [31] and the drivers that influence them, emerging and weak signals (i.e., “first indications of an emerging future change associated with society, technologies, innovations or other domains”) [31], as well as potential Wild Cards. The latter come more as a surprise and are unanticipated and unlikely to happen, but have a potentially high impact if they occur [31]. Therefore, they should not be completely ignored, as they may help to increase the overall ability and readiness to react.

Procedure

We conducted an in-depth analysis using the Futures Platform [49] for the initial horizon scanning. The latter is a research-based and expert-generated database of future trends and related signals, in which the picked-up phenomena are characterized into the following types: “Strengthening, Weakening, Wild Card and Weak Signal” [49]. The horizon scanning procedure involved exploring innovative ideas and patterns of future change detected in the Futures Platform database [49], which were then validated by experts or lead users in an open-ended qualitative data-gathering exercise. The important validation points are phenomena or signals (data points), trends (lines and circles), and emerging issues or primary developments (colors) [31]. The identified phenomena are visualized in a trend radar (see Figs. 1 and 2), covering different time horizons, i.e., the short-term (inner circle), medium-term (middle circle), and long-term (outer circle) horizon, as well as different types of signals. More concretely, the identified phenomena are visualized as Strengthening (marked green), Weakening (marked blue), Early (marked grey), or Wild Card signals (marked red).
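As a purely illustrative sketch of this radar layout (not the actual Futures Platform visualization, and with hypothetical stand-in signals), the radius can encode the time horizon and the color the signal type:

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical signals: (label, horizon, type), where horizon 1 = short-term
    # (inner circle), 2 = medium-term, 3 = long-term (outer circle).
    SIGNALS = [
        ("Responsible data economy", 1, "strengthening"),
        ("Consumer apathy", 2, "weakening"),
        ("Technology-free zones", 2, "weak_signal"),
        ("Apocalyptic AI", 3, "wild_card"),
    ]
    COLORS = {"strengthening": "green", "weakening": "blue",
              "weak_signal": "grey", "wild_card": "red"}

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    angles = np.linspace(0, 2 * np.pi, len(SIGNALS), endpoint=False)
    for (label, horizon, kind), theta in zip(SIGNALS, angles):
        ax.scatter(theta, horizon, c=COLORS[kind], s=80)
        ax.annotate(label, (theta, horizon), fontsize=8, ha="center", va="bottom")
    ax.set_rticks([1, 2, 3])        # one ring per time horizon
    ax.set_yticklabels(["short", "medium", "long"])
    ax.set_xticks([])               # the angular position carries no meaning here
    ax.set_rlim(0, 3.5)
    plt.show()

In this layout, only the radial position (time horizon) and color (signal type) are meaningful, which matches the reading conventions used for Figs. 1 and 2.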

Expert interviews

Rationale

In addition to the horizon scanning based on secondary sources (i.e., reports, articles, policy documents, and evaluations extracted from the Futures Platform), we also conducted semi-structured interviews with experts representing different stakeholder perspectives. Expert interviews or expert consultations are commonly used in Foresight activities, particularly horizon scanning [2]. The underlying goal was to bring together a multi-disciplinary expert team and—following the recommendations in [25]—to ensure that a range of perspectives is considered when addressing the research questions. Through the expert interviews, the aim was to collect the experts’ views on and understanding of future visions of genuinely humane or humanity-centered design of Intelligent Environments, the trends, factors, and barriers likely to play a role in this respect, and potential future developments. The involvement of experts from different fields, as well as six lead users/consumers (as will be explained in Sect. “End-user workshop”), was done to ensure a comprehensive and holistic analysis of the issues and to address skews and biases towards particular interests [19].

Procedure

The semi-structured interviews were based on an interview guide, following established best practices [50, 53]. The interviews started with a number of warm-up questions, followed by the core reflective questioning part of the interview, and were finalized with a number of round-off questions. The core part of the interview started with a set of questions about future trends, their importance, and their expected development; the questioning then turned to Human-Centered frameworks, technological developments that are relevant for the broad area of intelligent environments, and how these may impact future visions (considered at different timescales). Next, the experts were asked about their views on both positive and negative future design visions and expectations, and what they consider to be possible, feasible, desirable, or rather to-be-avoided developments and visions. Finally, they were asked to share their thoughts on which Human-Centric frameworks, as well as technical mechanisms and operationalizations, they consider relevant to (1) realize these visions and (2) avoid negative and undesirable developments.

The Norwegian Agency for Shared Services in Education and Research approved the study in terms of personal data protection requirements and GDPR compliance. The interviews took place via the Zoom videoconferencing platform and lasted 1 to 1.5 hours. At the beginning of each interview, the interviewee was briefed about the scope of the study and asked to sign the informed consent form. The interviewer (the first author) had prior extensive experience with interview-based research. The interviews were transcribed using a summary transcription method, as the goal was to capture the main themes, trends, and issues. They were further clustered and categorized using a number of frameworks, e.g., STEEPV, trends and signals, and future visions.

Sample description

A list of potential experts was prepared based on academic literature, policy reports, white papers, industry briefs, and professional referrals. In this regard, it was ensured that different stakeholder perspectives would be represented. The potential interviewees were contacted via e-mail and invited to an interview. Eventually, the expert panel consisted of eight participants with global and influential roles who could provide a solid understanding of future technical developments in Intelligent Environments, including industry practitioners, policy leads, and academics with technical engineering expertise as well as combined socio-technical design and engineering expertise. An overview can be found in Table 1.

Table 1 Overview of multi-disciplinary interview participants

End-user workshop

Rationale

Finally, to ensure participation not only from experts but also from end-users in the horizon scanning and joint visioning, an additional workshop was organized. The latter was an in-person workshop organized as a group interview or focus group discussion. As the involvement of end-users in foresight activities can be challenging due to, e.g., limited imagination (see, e.g., [10]), an adjusted interview guide and format were used. More specifically, a number of prompts were used to trigger the discussion.

Procedure

Overall, the session followed a similar approach to the semi-structured interviews: it started with a warm-up phase, was followed by a generative and reflective phase, and was rounded off with a number of closing questions. An overview of the session organization can be found in Table 2.

Table 2 Workshop session organisation

The workshop was structured into several sections with concrete tasks designed to facilitate active participation, knowledge sharing, and collaborative idea generation. The first section began with an introduction and an ice-breaker activity; then, participants were presented with an overview of humanity-centered concepts in Intelligent Environments. Next, fundamental principles and examples of frameworks related to digital ethics, human well-being, the right to privacy, technical transparency, and digital humanities were explained and discussed. The trend scanning section consisted of an individual brainstorming exercise, in which each participant identified the emerging trends and potential issues that they were aware of or familiar with related to the topic of humanity-centered intelligent environments. They then ranked the identified issues based on their perceived importance and potential impact on future development. After a short break, the participants actively engaged in group ideation sessions, generating positive and negative imaginations and identifying the associated outcomes and solutions tied to future developments. The future imaginations are precursors to scenarios that consist of a combination of multiple factors. These future visions are developed by imagining a particular future trajectory and asking “how” these developments can come about [31]. Finally, the ideas and solutions generated were assessed in a plenum. Feedback was given regarding any expected technical challenges and barriers, and, if no barriers were present, what the solution would be.

Sample description

A total of six participants were recruited to capture key viewpoints from the end-user perspective. The workshop was conducted to collect viewpoints, thoughts, and other feedback on future developments toward a more humane or human-centric technical Intelligent Environment. As the topic is fairly new, we recruited based on the following target criteria:

  • Lead users: Innovators and early adopters

  • IE awareness and experience: closely follow the development of technologies for smart home systems, lighting systems, or healthcare

  • Age: younger age group (18–35)

  • Gender: aimed for a 50%/50% balance, but achieved 20% female and 80% male

By recruiting lead users, the analysis aimed to tap into their forward-looking mindset and potential to shape the future. The sample consisted of consumers or lead users who considered themselves early adopters and had experience with new smart environment technologies. The recruitment process involved widely advertising the workshop through LinkedIn and several university digital notification boards. Participants were screened using a digital form with pre-set criteria. We recruited eight candidates, of whom six showed up to the workshop. All workshop participants provided informed consent prior to their participation in the study. The recommended number of participants in a lead user workshop for a Foresight innovation study ranges from six to twelve [53].

Results

Emerging trends for future humanity-centered intelligent environments

In order to anticipate desirable or undesirable outcomes and determine preferable directions for future scenarios in the realm of humanity-centered Intelligent Environments, it is important to first identify the macro-factors, or broad patterns of change. These macro-factors can be observed by examining long-term changes, and they help us comprehend the forces that shape the system’s evolution and the potential surprises that may arise.

Table 3 Emerging factors shaping the broader (contextual) environment

To do so, we applied the STEEPV analysis framework [31], introduced in Sect. “Methodology”. The results presented here are based on a triangulation of the secondary horizon scan, input from the expert interviews, and input from the lead users/consumers, as explained in Sect. “Methodology”. The triangulation process involved the cross-referencing of data, a comparison and validation of emerging trends, and a synthesis of the most prominent, robust, or innovative developments. These macro-factors encompass social, technological, economic, environmental, political, and value-based drivers that influence progress, as well as potential obstacles that may impede it. Table 3 lists the macro-factors that were identified through the conducted studies, by STEEPV category, together with the stakeholder groups that brought them up. Moreover, Fig. 1 provides a summary of the identified macro-factors, their expected likelihood, and their trajectory over time. The factors particularly relevant to the context of Intelligent Environments concern the socio-technical system, human-environment interaction design, and public involvement, while some developments are relevant to digital life in general.
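As a minimal illustration of the corroboration step (the factor names below are hypothetical stand-ins, and the actual synthesis was a qualitative exercise rather than a mechanical rule), factors mentioned by at least two of the three data sources could be retained as robust:

    # Factors surfaced by each data source (hypothetical stand-ins).
    horizon_scan = {"value of data", "dataism", "platform dominance", "zero-emission tech"}
    expert_interviews = {"value of data", "platform dominance", "responsible data economy"}
    lead_user_workshop = {"platform dominance", "dataism", "climate anxiety"}

    sources = [horizon_scan, expert_interviews, lead_user_workshop]
    candidates = set().union(*sources)
    # Keep factors corroborated by at least two sources as "robust".
    robust = {f for f in candidates if sum(f in s for s in sources) >= 2}
    print(sorted(robust))  # ['dataism', 'platform dominance', 'value of data']

Factors raised by only a single source were not discarded outright; following the Foresight rationale in Sect. “Methodology”, singly-sourced but innovative developments could still be retained as weak signals.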

Fig. 1 Macro-factors—summarized

A first observation when discussing the broader macro-factors is that the multi-disciplinary stakeholder groups identify differing primary developments (i.e., previously identified primary signals, marked green if strengthening or blue if weakening) and critical uncertainties (i.e., emerging weak signals, marked grey, or wild cards, marked red) that lead to optimistic or threatening outcomes in the future development of Intelligent Environments. While primary developments in this context refer to those developments with a higher likelihood of certainty, critical uncertainties refer to more uncertain yet noteworthy developments [31], as also described in more detail in Sect. “Methodology”.

Environmental drivers: From an environmental perspective, the STEEPV categorization sheds light on primary developments such as focusing on energy efficiency and reducing environmental impact assisted by Intelligent Environments. More critical uncertainties include zero-emission technologies and technology-free zones. In the context of environmental drivers, widespread climate anxiety, sustainability requirements and measures for intelligent urban planning, and support for managing crises caused by climate change were stressed as important by the lead users, the policy lead, and academics. As expressed by the policy lead: “IoT/IE systems have the potential to perform activities that are undesirable or hazardous for humans. This includes handling boring or tedious tasks, as well as working in dangerous environments such as gas pipelines and fires. Ultimately, the ideal outcome is for the system to perform these activities that are inconvenient or dangerous for us, particularly in cases where doing so would significantly improve our daily lives” (Policy lead, Interview 3).

Social drivers: A social macro-trend is the increased need to protect human and societal well-being in digital life, which was pointed to by the interviewed socio-technical and engineering academics as well as the lead users. Increased inequality in the labor market, the digital divide, and widespread web addiction drive adverse social outcomes, leading to a situation where end-users are burdened with additional labor in order to be self-governing and able to control their lives. Invisible online surveillance and manipulative algorithms were mentioned as amplifiers of these drivers. On the other hand, essential social drivers that promote positive outcomes include increased consumer and public awareness, growing concern for digital safety, privacy, and well-being, and a greater emphasis on ethical technical responsibilities and education.

Economic drivers: The economic macro-factors were presented in the context of primary developments such as the increased value of data, the prevalence of the attention economy, and capitalism in crisis. The more uncertain developments were dataism—where data rules all decisions—along with a constant invasion of privacy, identified by secondary foresight reports, futures databases, lead users, and industry stakeholders. Other primary economic developments, such as the growing demand for a more responsible data economy and ethical business models, were brought up by the interviewed academics. What stood out in general, and on which all stakeholders concurred, was the dominant position of technology giants. “GAMMA” refers to the dominant digital platforms—Google, Amazon, Meta/Facebook, Microsoft, and Apple—and represents an economic macro-factor with significant influence on future design developments. One of the participants in the lead user workshop drew attention to this issue: “There is a possibility of global technology companies gaining control over governments in the future” (Lead user number 4, female, age 33). Lead user number 6 (male, age 34) even went as far as to suggest that this concentration of wealth and corporate power could result in a mafia-like setup of our global society, further exacerbating the issue.

Political drivers: Beyond the economic perspective, primary political macro-trends were also identified as significantly influencing the direction of technical developments, including escalating geopolitical tensions, cyber politics, and the degree of investment in public technology governance. The erosion of consensus and solidarity among groups was found to pose a threat to democracy and was associated with political uncertainties by socio-technical academics, secondary foresight sources, and lead users. While industry actors view the competition for digital power as a primary driving force for change, policy experts and lead users saw a challenge to the rules and allocation of digital power as a critical uncertainty.

Technical drivers: Moving on to the technical perspectives, we found that the technical race towards smart sensors everywhere, general artificial intelligence, ambient intelligence, and intelligent augmented realities is intensifying. A completely corrupted internet, an AI arms race, and apocalyptic AI represent additional uncertainties for building more safety and autonomy for humans into existing intelligent environments; the latter were brought up by policy experts and academics in the expert interviews. Moreover, it was argued that the rate of technological progress leads to a massive gap between technical capabilities and human acceptance, especially when integrating artificial intelligence and machine learning technologies. According to a policy expert interviewed for this study, “the rapid pace of technological advancement has brought humanity into uncharted territory”. Connected concerns in this respect relate to the rapid pace of technological advancement causing a lack of human control over technology, and further include a perception and expectation that cybersecurity issues in Intelligent Environments will end up unresolved. As a result, policy experts and academics fear that humans will lose their ability to prioritize human and environmental safety in technical design, which could have severe consequences.

Value drivers: In contrast to the academic perspective, the industry viewpoint emphasizes the consumer’s ability to reject technical solutions that do not meet their expectations or that pose a risk to the environment or human well-being. Industry professionals anticipate that future technological advancements will prioritize more humane technology, as people will only accept solutions that enhance their lives without any potential risks. As one of the industry experts put it: “The future development will move towards more humane technology regardless of any stakeholders’ actions, as people will only accept what improves their lives. If not, they will choose differently and they will not accept any risk” (Industry practitioner, Interview 5).

As a result, a primary value driver for the development of Intelligent Environments identified by industry experts is humans’ and society’s growing unwillingness to tolerate any system design faults or risks to human and environmental well-being. Policy experts share this sentiment, emphasizing that society will not tolerate accidents or harm caused by autonomous IE systems. The macro-factors that will move the dial in this direction include the significance of community involvement, holistic system design, and Intelligent/Human Empowerment amplification. The factors pulling in the opposite direction include techno-chauvinism and consumer apathy.

Identifying barriers and solutions

As an output of the scanning activity described in Sect. “Horizon scanning”, the Foresight trend radar depicted in Fig. 2 provides a mapping and visualization of the significance and potential impact of the emerging barriers and issues related to the development of future humanity-centered design frameworks and methods for Intelligent Environments. The primary developments identified in the analysis were marked green if gaining strength over time and blue if losing strength. Wild Cards (marked red) and Weak Signals (marked grey) are characterized by high uncertainty and have the potential for disruptive or unforeseen future developments. For more detail, please refer to Sect. “Methodology”.

Subsequently, the factors that are essential for genuinely humane design in Intelligent Environments are classified into different themes, including changes in design logic/values, processes, user-interaction modalities, architecture, responsible technology, methods for assessment and audits, methods to achieve human well-being, and safety and security features, as shown in Fig. 2 and as briefly discussed below:

Fig. 2 Factors—Frameworks and methods summarized

Change in design logic and principles: First of all, factors that are strengthening in development are shifting the technical design logic and principles away from technology-driven approaches and towards human-centric, sustainable (humans and environment first), and actively participatory technical approaches. For example, in the consumer workshop, the lead users shared their concerns about the potential negative impact of paternalistic design on their lives. They worry that the technical system design may impose restrictions on their thoughts, actions, and lifestyles, regardless of whether the outcomes are positive or negative. Lead user number 7 (male, age 29) provided a hypothetical example to illustrate this concern: “For instance, if the government has full access to your power consumption through your smart house, they could use it to influence your behavior. They could tell you that you are not allowed to watch TV anymore because you are overweight and that you should go for a walk instead. In theory, this could happen.” These insights highlight the need for future technical IE frameworks to prioritize human autonomy in the design logic and to avoid designing systems that restrict users’ choices and freedoms.

Change in work and participation processes: The factor developments that move the technical design in a more humanity-centered direction include more participatory and collaborative processes involving not only users but also communities and diverse engineering teams in designing and implementing Intelligent Environments. However, adopting new design principles is challenging, and communication gaps between IT professionals and user experience assessors can hinder effective design. To address these issues, stakeholders need to improve communication through continuous dialogue, establish multi-disciplinary and diverse teams, educate engineers on ethics, and develop more effective evaluation methods. One of the respondents from the expert interviews puts it as follows: “New thoughts require heavy lifting, new design principles need to be adopted easily otherwise they will be ignored” (Industry practitioner, Interview 5).

Architecture: The primary factors relevant to technical architectures feature a user- or human-centric perspective that captures a genuine user representation on an ongoing basis and that is localized, self-managed, or governed by semi-autonomous human/citizen/community-in-the-loop technologies. However, incorporating participatory and collaborative processes into the technical architecture is a barrier highlighted by both industry and academia. Examples of these challenges include engineers’ education and pride, as well as obstacles related to encouraging and motivating human users to participate in development work, and the tools for such involvement. How designers and engineers would include humane or human-empowering outcomes in the technical design is considered to depend largely on the intrinsic, ongoing engagement of end-users and on the prioritized technical design considerations. This outlook was exemplified by an industry practitioner: “What truly matters is if you include all of the ethical and humane technical considerations or some of it” (Industry practitioner, Interview 2).

Methods to achieve human well-being and performance: The strengthening factors expected to have a positive impact on future design frameworks are those that foster widespread awareness among all stakeholders to achieve trust and societal acceptance, community education, and evaluation in practice. Academics emphasize the need for building Intelligent Environments that amplify the role of humans, people, and citizens in technical design frameworks. Conversely, industry actors argue that the existing design frameworks for Intelligent Environments already fulfill some, if not most, of the humane and ethical requirements. A practitioner from the industry articulated this viewpoint, stating: “Well-thought out principles and frameworks exist. There are numerous conferences and applications dedicated to Human-Centered AI. However, these efforts often remain superficial, with outcomes often ending up forgotten. Nonetheless, there have been successful initiatives such as the AI principles established by Google, which have become a code of practice. These principles outline specific actions that corporations adhere to, and internal stakeholders are actively involved in reviewing and ensuring compliance” (Industry practitioner, Interview 2).

The policy expert and socio-technical designers share a common view regarding the importance of Human-Centered methodologies in the design of intelligent environments that prioritize human well-being, comfort, and safety. Both emphasize the inclusion of methods that go beyond optimizing experience and satisfaction and instead consider aspects such as privacy, empowerment, accessibility, and ethics. Concepts such as user-centered design, participatory design, and empathetic design are recommended to place the needs and preferences of users at the forefront. A socio-technical academic expressed the necessity for these methods to encompass various factors, including privacy, usability, ethics, accessibility, trust, and empowerment, while ensuring technical interoperability and scalability.

Methods for assessments and audits: Future primary developments that address existing human and social limitations include the ability to assess and audit for bias and discrimination, to ensure robust evaluations, and to establish stringent technical standards from both social and technical perspectives (exemplified in signals such as civic technology, AI in participatory democracy, and corporate user rights protection). Generally, industry experts highlight the significant challenge of establishing a shared understanding of human agency, robustness, and safety in technical design, along with effective assessment in real-world contexts. Both academic and industry experts acknowledge the inadequacy of existing frameworks, methodologies, and standards in achieving the desired outcomes. These expert observations reveal that a major barrier to this approach is the lack of a definitive definition of humane or human-empowering design, as well as conflicting guidelines and difficult-to-implement technical audits and standards. This challenge is further compounded by a clash in expertise, commercial interests, overhyped technology, and diminishing human agency and control, as the socio-technical and engineering academics highlighted. An industry practitioner emphasized this point: “The most critical issue is to achieve a common understanding of what humane means and stands for, and how it translates into practice, and evaluate as well” (Industry practitioner, Interview 2). It was further elaborated on by an HCI/socio-technical engineer: “I am not aware of any new factor, framework, or methodologies that will support these developments in the future. All the factors, methodologies, frameworks—all of them are ill-defined. All of them are not achieving the goals of human empowerment today” (HCI/socio-technical academic, Interview 7).

Transparency, accountability, and responsible technology: Additional factors that contribute positively to the development of humanity-centered intelligent environments involve the incorporation of transparency mechanisms that empower humans to govern the technical aspects of these environments. Future weak signals include privacy demonstrated by technical proof, and functionalities that let users opt out of the sensing environment’s automated decision-making.

A significant barrier to achieving this is obtaining accurate, trustworthy, and acceptable measurement tools, especially in terms of their transparency to end-users and society. One of the interviewed academics with technical expertise expresses it as follows: “If a concept cannot be measured, it cannot be effectively evaluated” (Technical academic, Interview 6). Comparatively, the industry experts also note a significant challenge with metrics in the evaluation of human-empowering intelligent environments. As one of them stated, there is a “big challenge with metrics. This is because not all metrics are equally important and there is a need to link certain metrics to principles and contextualize them to enable a conversation between them. While it is possible to create intricate constructs of measurements, they are only likely to work in rare circumstances. Any attempt to rely too heavily on metrics is therefore an illusion, and things can still go wrong” (Industry practitioner, Interview 2).

User-interaction modalities: Emerging, or more subtle, developments concern immersive user-interaction modalities in virtual/physical spaces and interfaces that are accessible to wider and more diverse populations working or living in the public environment. To illustrate, an academic with an engineering background envisioned the possibility of developing more intuitive, transparent community interfaces for citizens’ interaction with Intelligent Environments.

Safety and security features: Strengthening developments include the use of advanced privacy- and cybersecurity-by-design frameworks, along with control, accountability, and stop mechanisms. To illustrate, developments in control theory are, in the medium horizon, expected to ensure the safety and security of occupants, minimizing harm and maintaining control in Intelligent Environments. There are also anticipated challenges, which the interviewed academics point out are caused by the existing design of intelligent environments that prioritizes commercial interests and the consumption of services and goods. A notable concern lies in the limited consideration given within the technical design to the perspective of an active, self-governed end-user who may adhere to ethical principles but may also disrupt the system. This perspective is articulated by a socio-technical academic, who states: “Stakeholders such as place managers and device managers, responsible for safeguarding human and societal aspects in the technical system design, often prioritize security, safety, and resilience against attacks or data manipulation. While the concept of humane design may be acknowledged, it is regarded more as a means to an end rather than a fundamental objective. The primary focus is on ensuring user adoption and perception, with users being perceived as customers, walking sensors, and data generators” (Socio-technical academic, Interview 1).

Future visions

The empirical findings outlined above affirm that the advancement of future intelligent environments is shaped by several macro-trends, drivers, and barriers, which serve as the foundation for determining feasible, possible, and desirable future visions. In order to move towards genuinely humane intelligent environments, or the preferred direction, it is recommended to map potential future visions through three lenses: feasible, desirable, and possible [40]. By combining trends and predictions with alternative futures, developments can be steered towards a genuinely humane design logic that is linked to desirable, human-empowering outcomes. What is feasible and possible is situated in the context of technology and science; the desirability question extends into political, social, and value contexts that intersect with governance, regulation, and policy [40].

Table 4 Humanity-centered or humane intelligent environments—imagined optimistic futures

As explained in Sect. “Methodology”, we adopted a multi-stakeholder approach to envision alternative futures that guide the technical development of intelligent environments towards either a positive (Table 4) or negative imagined trajectory (Table 5). More concretely, the study participants were initially prompted to envision an optimistic imagined future for a truly humane intelligent environment, as summarized in Table 4. Future frameworks are expected to establish the parameters for democratic, sustainable, fair, and inclusive socio-technical intelligent environments, as perceived by the lead users and consumers. The consensus among academics (A) and industry practitioners (I) supports the formulation of a Human-Centered framework in the current or short-term time horizon (H1), which fosters active human participation and enables the seamless exercise of human and societal agency in the technical design. Looking ahead into H2, the lead users (C) and academics (A) envision the emergence of new models of technology ownership and incentives that will counterbalance the prevalent data-driven commercial business models, and the technical determinism or technical chauvinism that presents obstacles in the current environment. Furthermore, policy experts (P) anticipate the prohibition of privacy infringements within the medium horizon (H2), while industry professionals (I) foresee the imposition of mandatory ethical compliance in technical solutions in the long-term horizon (H3).

The alternative futures that can lead to negative outcomes (see Table 5) encompass violations of ethical principles, manipulative and harmful human/societal IoT loops, and an imagined future in which humans completely lose control over the connected environment. Notably, the industry perspective frames the worst outcomes as unintended harm, while academics, policy experts, and consumers envision a more severe and even apocalyptic impact on humanity. To exemplify, an interviewed industry expert emphasized the importance of thoughtful consideration during the development of intelligent systems to avoid low adoption rates. The interviewee expressed concerns that if the technology fails to meet people’s needs or violates ethical principles, it may face rejection and negative news coverage, leading to a tarnished corporate reputation.

Table 5 Humanity-centered or humane intelligent environments—imagined negative futures

In the medium horizon (H2), academic experts envision an Intelligent Environment system in which the control and decision-making power behind Intelligent Environment systems does not serve the interests of the majority of humanity. Once the general public understands that this is happening, it might be too late, or society will refuse to continue in that direction. However, the nature of this development is highly uncertain since we have no historical precedent for such a situation. Concerning the future development of human-centric IE technologies, another technical academic expressed a pessimistic imagined future in which the loop created to integrate human decisions into the technical environment manipulates human thoughts and culture. To counter this, it will be necessary to foster multi-disciplinary involvement.

Another academic, specializing in Human-Centered approaches in IEs, highlighted the concerns in the context of data manipulation, stating, “While having access to data is one aspect, the more alarming issue lies in how these data are utilized to influence consumer choices, political attitudes, voting behavior, and other significant decisions that directly impact individuals. Furthermore, individuals often have no means to understand the rationale behind these decisions or to express disagreement” (Human-Centered Academic, Interview 8).

Within the long-term horizon (H3), policy experts outline the vision that the rapid pace of technical advancements and diminishing human control could result in the emergence of an intelligent environment that is excessively complex and uncontrollable. As emphasized by a policy expert in the interview: “We are entrusting complex human tasks to machines, which is an entirely novel experience. While we can rely on intelligent systems to perform specific functions, such as in the case of airplane operations, when it comes to areas like elderly care, we need to have absolute confidence that the system will deliver as expected. Otherwise, people will not be prepared to embrace such technologies” (Policy expert, Interview 5).

How can developments be steered in the preferred/desired direction?

Finally, a question that was explored is how the developments can be steered in the desired direction. Highlights of the proposed innovations and developments that have the potential to make Intelligent Environments more humane are summarized in Table 6. All experts clearly state that ethical and humane technical developments are not challenged by technical barriers, but by corporate incentives (Industry practitioners, Interviews 2 and 5; Technical and socio-technical academics, Interviews 4, 6, 7, 8). Therefore, new incentive models, such as transparent business models and services, are—in the medium horizon—expected to offer solutions to steer the technical development towards sustainable development goals, including good working conditions and reduced energy consumption. A prevalent concern is the issue of dominant decision-makers exploiting or taking advantage of minority groups, regular citizens, or communities.

Table 6 Examples of proposed innovations to build more humane or humanity-centered intelligent environments

Proposed solutions include carefully considering the ways in which the Intelligent Environment replaces human and societal processes rather than supporting them, ensuring that the technical processes are not misplaced and do not cause harm. There must be a realization that technology-driven approaches cannot answer all human and societal challenges. Another solution is carefully selecting the right partners and use cases. Or, as one of the academic experts put it: “Involve the right partners and work with partners with the right incentives” (Technical academic, Interview 6).

Further, it was put forward that the relevant ethical and humane design frameworks must be formalized and socialized in order to be operationalized. In the long-term future, the bar for ethical technical industry standards will rise, ideally making them legally enforceable and accepted by all engineers. Digital corporations will then face the strictest ethical requirements for product design, testing, and operation.

The barriers are manifold and mainly not technical, because the technologies are designed and implemented by humans. To achieve widespread awareness, more voices need to be involved in designing and developing the technical participation tools. The latter is also needed to prevent a single group from controlling the technical tools while racing ahead towards the future. One of the experts framed it as follows: “One group has the tools that they need to build the technology, while another group do not have access to the tools to achieve the human empowering goals that they need. They simply do not have it. One group is strong, has the tools, and is running like a train to the future. And one group is with caveman tools. There is a gap in the interest and control between these groups. Like a power control” (Socio-technical academic, Interview 7). Balancing these interests through continuous assessment and dialogue with multiple groups is therefore considered necessary and is expected to lead to genuine human empowerment and the upholding of democratic values.

Non-technical solutions considered to have the most impact include public education of the communities served by the Intelligent Environment, as well as training engineers via ethics education and orienting their mental models more explicitly and systematically towards placing human users and society front and center in the technical development.

One proposed technical solution for the development of humanity-centered Intelligent Environments involves integrating civic technology design within a community-in-the-loop framework. This entails fostering multi-disciplinary community involvement to ensure active participation and collaboration in designing and implementing intelligent systems. Engaging in effective communication with individuals interested in city development is crucial for gathering valuable insights that can inform the creation of a robust theoretical framework. By actively involving the perspectives and experiences of diverse communities and citizens, a more comprehensive understanding of their needs, preferences, and aspirations can be obtained. This inclusive approach promotes genuine interaction between experts and the general population, encompassing various segments of society.

Through a community-in-the-loop approach, stakeholders can tap into the collective wisdom and expertise of citizens to inform the design, development, and evaluation of intelligent environments. By fostering meaningful engagement and collaboration, the technical solutions can better align with the aspirations and goals of the wider population, ensuring that Intelligent Environments truly serve the needs and interests of the community at large.
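
A minimal sketch of what such a community-in-the-loop gate could look like in code is given below. It assumes a simple quorum-and-threshold voting rule that the participants did not specify; it is illustrative only, not a framework proposed in the interviews.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A proposed change to the environment, e.g. deploying a new sensor."""
    description: str
    votes_for: int = 0
    votes_against: int = 0

def community_review(proposal: Proposal, quorum: int, threshold: float) -> bool:
    """Approve a proposal only when enough community members participated
    and a sufficient share of them supported it; otherwise defer."""
    total = proposal.votes_for + proposal.votes_against
    if total < quorum:
        return False  # insufficient participation: defer rather than force
    return proposal.votes_for / total >= threshold

p = Proposal("add air-quality sensors to the market square",
             votes_for=42, votes_against=9)
print(community_review(p, quorum=30, threshold=0.66))  # True: may proceed
```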

Discussion

We now briefly discuss the main implications of the results presented in Sect. “Results”, by turning back to the research questions introduced in Sect. “Introduction”.

Current drivers, trends, barriers and the lack of alignment

Section “Emerging trends for future humanity-centered intelligent environments” provided an overview of the identified current drivers, developments, and trends expected to affect future Humanity-Centered Intelligent Environments. These drivers and developments were categorized by means of the STEEPV framework [31]. A first observation is that there is no consensus among the different stakeholders on the primary developments and no fully shared understanding concerning the need to envision a more humanity-centric design. There is a clear difference between industry and other stakeholders on the need to act (and what that acting implies) in order to build more humane technology and to technically safeguard human agency within intelligent environments. While fundamentally all stakeholders agree on the need to consider and prioritize humans and society in the design, the underlying drivers for existing and future frameworks for human-empowering design in Intelligent Environments are disputed. In particular, there is a lack of alignment on the degree of technical protection that is needed, who should be protected, and what human agency stands for (academics vs. industry). Future human-centric IE scenarios should therefore also evaluate the degree of agency required. However, the lack of a clear and agreed-upon definition of humane or human-empowering design, as well as the lack of concrete standards, adequate frameworks, and methods for assessing and auditing the real-world effects of the intended outcomes, were identified as barriers in this respect. Further, while the larger emphasis on education, ethics, transparency, and the responsibilities associated with the relevant technologies was considered highly important by several academics, policy experts, and the interviewed lead users, the interviewed industry experts did not raise these topics as explicitly. Furthermore, one of the industry experts claimed that these ethical and humane translations are already prioritized today.

Moreover, the industry, academics, and lead users identify technological determinism or chauvinism, or enchanted determinism, as the most impactful barrier hindering the design of a humanity-centric intelligent environment. However, they differ in recognizing the implications of technological determinism in intelligent environments for human and social well-being. Interestingly, this driver did not appear as impactful in the literature review [56] or the secondary horizon scan. Enchanted determinism in this respect refers to a sort of technological optimism that is magical and deterministic [8]. Enchantment shields the creators of intelligent systems from accountability, while its deterministic, calculative power intensifies social processes of classification and control [8]. The existing technical IE frameworks do not consider the commercial incentives as a core barrier, resulting in unsustainable business models and technical solutions at the expense of the environment or at a social cost [56]. However, the costs and benefits of the technical business model need to be considered when designing for human control and agency. Negative consequences ranging from apocalyptic outcomes, to low trust and acceptance from society and humans, to hyper-concentration of power and wealth are envisioned if the technical model is commercial only by design and does not consider the social and environmental impact (i.e., is not driven by public-good design).

Finally, a key observation based on the drivers and trend elicitation is the lack of awareness among the public, government, and wider stakeholders in society, which leads to unaligned technology design that does not allow for user resistance, support, or responsible communication (brought forward by the academic experts). However, the lead users/consumers were generally aware of the risk of losing agency, control, and power over their lives. In fact, there was a fear of technology taking over their lives entirely. Therefore, future human-centric IE scenarios will need to ensure that technical mechanisms are in place to keep all stakeholders informed and accountable for their actions.

The above fear also came up in the discussion around barriers and ways to overcome the challenges that present themselves when striving to adopt and realize a more humane or humanity-centered paradigm. While strengthening and preserving human agency against paternalistic principles clearly appeared to be top of mind from the user/consumer perspective, the expected future technological developments of pervasive and intelligent environments also instilled a feeling of powerlessness in humans/end-users. This powerlessness was considered to remove the ability to control one’s personal environment, eroding confidence in one’s thoughts, feelings, and perceptions through compliance with paternalistic rules of engagement dictated by the technology. This barrier was not detected in future signals originating from other multi-disciplinary groups [57], the literature review [56], or the secondary future research/horizon scan, but was strongly raised in the end-user workshop.

As already implied by the differentiated visions concerning who should act and how, and what the need for human agency really implies, the lack of a clear vision, established frameworks, and methods for human-empowering solutions in IE is seen as a critical obstacle to achieving genuine empowerment. Yet, several proposals were raised. The most innovative approaches grounded in human-centric theories include a no-harm framework based on human and social factors, and control theory to establish stop mechanisms. There are indications of a need to build community interaction into the participatory sensing and IoT loops; however, no specific theoretical principle or metric has yet been proposed for the empowerment of all citizens, and the interviewed experts struggled to envision a shared community-oriented framework with specific measures that can evaluate human empowerment.

Possible, feasible, and desirable visions

As outlined in Sects. “Methodology” and “Results”, the different yet interdependent STEEPV systems and domains also need to be jointly considered when discussing what is possible, what is feasible, and what is desirable. The possible vision: When exploring future scenarios for a genuinely humane or humanity-centered intelligent environment, the possible vision is one where humans, society, and the environment are primarily considered in designing the future Intelligent Environment technical design frameworks and methods. According to [40], these aspects and questions are more recognized and situated in the context of science and ecology. Promising examples, originating from secondary foresight sources, industry, and academics, amplify the human experience in the physical environment or contribute to eco-efficiency. Signals include Intelligent Augmented Reality, Intelligent Amplification, Immersive Virtual Spaces, and Augmented Urban Reality, offering real-time, many-way knowledge and space interactions, a.k.a. spatial data loops (refer to Fig. 2 for more details).

The feasible vision: The question of what is feasible is more situated in the realm of technology and economics (based on [40]). This is highlighted by the fact that industry stakeholders - having economic incentives to maintain the current models and solutions - state that the existing Human-Centered intelligent environment is already ethical and humane, and that there is little need for improvement. There is an acknowledgment of unintended harm occurring due to digital power imbalances; however, this is expected to correct itself through consumer demand. In the medium time horizon, the “Internet of Everything” will lead to everything with a chip being connected and integrated, communicating seamlessly within a physical environment (refer to Fig. 1). Examples of longer-term impact technologies are ambient intelligence, general artificial intelligence, and digital twin technologies that enhance the way people interact with their environment to promote safety and enrich their lives (or make their lives simpler) (refer to Fig. 2). The human-centric mechanisms require minimal human involvement, with reputational checks and consumer preference steering the technical vision in a humanity-centric direction.

The desirable vision: Finally, the vision of what is preferable and desirable is where the social, political, and value domains meet and where aspects of governance, policy, and regulation are of key importance [40], strongly interlinked with science and technology. In this respect, the participating academics introduce automated citizen-sensing loops and collective intelligence as long-term impact technologies, where citizen participation, genuine user representation, and AI in participatory democracy bring in meaningful citizen involvement and anticipatory governance (refer to Fig. 2). The lead users brought in the vision of diverse and non-discriminatory technologies, equipment to avoid surveillance, and inclusive interaction design as innovative examples. Thus, in this vision, the Intelligent Environment should be geared towards protecting against negative social and environmental impacts by introducing a public-good/civic design logic translated into appropriate technology solutions. This includes considering how future IE scenarios will impact the community and the environment, and how to account for the needs of different types of users and stakeholders.

How can the developments be steered in the desired direction, supported by a uniting vision?

Evaluating the potential impact of different trends, predictions with alternative futures, and prioritized actions suggested by experts in the interviews helped distill the following set of recommendations. The findings emphasize the necessity of incorporating multiple perspectives to ensure the successful delivery of equitable and fair outcomes in the next-generation IoT and IE. In this respect, the uniting vision requires a systemic alignment and policy-pushed change, centering around a number of key aspects:

  • Technical Feasibility: The research indicates a unified agreement that there are no significant technical barriers hindering the design of human-centric Intelligent Environments that prioritize human control and meaningful agency. The desirable vision is already feasible from a technical standpoint.

  • Incentives for Industry: Suitable incentives should be established to encourage industry and technological players to construct the next-generation, human-centric IE in a way that promotes meaningful agency for users. New and appropriately targeted design incentives are needed, as the existing economic incentives constitute a real barrier. Such revised incentives could be based on the Sustainable Development Goals, which include environmental and social protection [34].

  • User Empowerment: True multi-disciplinary technology integration necessitates an empowered user representation. Users should have a genuine voice and the ability to influence design decisions through committees or civic technology design approaches.

  • Continuous Evaluation and Co-creation: Future scenarios for intelligent environments should undergo continuous technical evaluation and improvement through co-creation and participatory design. Meaningful involvement and democratic participation of communities, along with incentives that protect the public, are crucial for achieving a genuinely humane IoT loop.

  • Trust, Safety, and Agency: Differing degrees of trust, safety, and agency need to constrain the technical model of intelligent environments. Policies, regulations, and governance structures should be established to enforce and guide such development.

  • Transparent and Accessible Interfaces: Interfaces should be transparent and accessible to the public, giving communities visibility into how the system operates. These solutions promote understanding and allow joint assessment of the system’s decisions (a minimal sketch of such a public decision record follows this list).
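
To illustrate the last recommendation, the sketch below records each automated decision as a public, human-readable entry stating what was decided, why, and on which data. The function and field names are our own illustrative assumptions; the experts did not specify a concrete format.

```python
import json
import time

def log_decision(system: str, decision: str, rationale: str,
                 data_used: list[str]) -> str:
    """Record an automated decision as a public, human-readable entry so
    that communities can see what was decided, why, and on which data."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system": system,
        "decision": decision,
        "rationale": rationale,
        "data_used": data_used,
    }
    return json.dumps(entry, indent=2)

print(log_decision(
    system="adaptive street lighting",
    decision="dim lights in sector 4 by 30%",
    rationale="low pedestrian density for the past 20 minutes",
    data_used=["anonymized footfall counts"],
))
```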

In general, all experts and lead users agree that in order to foster meaningful change, it is imperative to establish legally enforced ethical safeguards that protect against competing interests and dominating corporate values. Incorporating diverse interests and perspectives requires a systemic approach. This entails developing comprehensive frameworks, policies, and governance structures that can accommodate and integrate the interests and values of various stakeholders. A systemic approach recognizes the interconnectedness of different elements within the intelligent environment ecosystem and seeks to address the inherent complexities through a holistic lens.

Limitations and future research

This study deployed a mixed-method approach to provide a rich and nuanced multi-disciplinary understanding that can serve as the foundation for future IE scenarios that are coherent and grounded in humanity-centered theory. The triangulation of horizon scanning with qualitative approaches and the construction of future imaginations allowed for a more open and comprehensive exploration of emerging trends, issues, and alternative visions. However, we acknowledge that the method has certain limitations:

  • The expert interview samples were selected using a purposive sampling method, thus introducing a possible sample bias.

  • The use of interpretative and situated knowledge to analyze the qualitative interviews and focus groups may introduce researcher bias.

  • The Horizon scanning method relies on available data, which may not capture the full breadth of potential future developments [31].

  • The selection of alternative future imaginations and visions used a normative approach, which relies on the perspectives and values of the participants, potentially limiting the range of possible futures considered [31].

  • Due to limited resources and time constraints, it was not possible to conduct a larger number of interviews and focus groups, which may have constrained the representation of diverse perspectives.

Follow-up research is therefore needed to further enrich and validate the findings. The next step for future work is to further develop concrete future scenarios based on the identified trends, drivers, barriers, and future visions, as part of a follow-up participatory process involving multiple stakeholders. Such scenarios can help to set clear goals and identify which intermediate steps and milestones the roadmap towards reaching the supported future vision should contain. Future research should, in particular, include a participatory process where multiple stakeholders (including experts, diverse segments of end-users, and other stakeholders) drive the assessment of future scenarios in order to achieve consensus. As the current research has shown, different stakeholders have different goals and interests. Making these explicit, facilitating the creation of a shared direction, and understanding the implications of different design decisions from different angles are therefore considered crucial to steering developments towards more humane and humanity-centered intelligent environments. In follow-up scenario development research, feedback about what would happen under various future contingencies will be collected and developed further. Through future workshops, the highest-potential “success scenarios” can help identify plausible, preferable, and possible outcomes, as well as undesired outcomes and their associated triggers [31]. The results from a ‘success’ scenario approach might indicate how to steer towards the desired direction, supported by a uniting vision [31]. In addition, as mentioned above, some of the discussed developments and outcomes bear relevance beyond the specific IE focus and may be applicable to other areas of increasingly digitalized societies and “digital life” more broadly. Follow-up work is therefore needed to better understand which of these outcomes and recommendations are more generic and which role they can play in future visioning beyond Intelligent Environments as such.

Conclusion

This paper aims to build a forward-looking and shared understanding of how to develop a genuinely humane or humanity-centered Intelligent Environment. We employed a systemic Foresight (ForSTI) methodology considering diverse disciplines and perspectives to achieve this objective. As a starting point, we conducted a Horizon scanning exercise, combined with qualitative methods that engaged multi-disciplinary experts and lead users/consumers. The horizon scanning analysis identified emerging trends, anticipated potential barriers and solutions, and assisted in envisioning possible, desirable, and feasible future visions for intelligent environments that prioritize humane and humanity-centered principles.

In conclusion, the future trends, developments, and drivers (RQ1), and the important barriers (RQ2) that can hinder the realization of a truly humanity-centered intelligent environment, depend as much upon understanding the scope and purpose of the incentives behind the dominating commercial model for Intelligent Environment technologies as upon developing the technical mechanisms that can genuinely protect humans and society. The technical foundations of the existing Intelligent Environments are underpinned by a digital surveillance economy with incentives to monetize the behavior of individuals, governments, or communities for the sole benefit of commercial actors [59]. A prevailing concern is the issue of dominant decision-makers exploiting or taking advantage of minority groups, regular citizens, or communities under the umbrella of technical enchanted determinism [8]. Future visions and the associated solutions present frameworks, methods, and mechanisms that safeguard and protect humans and society against the envisioned adverse developments, in which ethical violations, manipulative social/human-in-the-loop dynamics, and uncontrollable IEs negatively affect human and societal well-being. Barriers are manifold and mainly not technical, revolving instead around balancing the diverse interests in technical configurations and processes, such as upgrading engineers’ education, formalizing and socializing ethical frameworks and guidelines, and aligning technical standards and audits to govern and achieve accountability among all stakeholders managing the technical environment.

When grouping which visions and future developments are feasible, desirable, and possible (RQ3), the most feasible vision assumes weak restraint and resistance from the general public, citizens, and users, allowing technical enchanted determinism to fuel a race towards ever-smarter surveillance that senses and communicates seamlessly, without genuine human acceptance, awareness, and trust. The possible vision includes technical frameworks that prioritize and augment human and environmental aspects, with technical solutions that include amplified life experiences, direct control, and eco-efficiency. The desirable vision is one where citizen participation, genuine user representation, and AI agents assist with anticipatory governance and meaningful involvement. The human role is intrinsic, representative of a diverse population, and built into the technical solutions, i.e., a participatory community-in-the-loop that is co-created with diverse citizens. Solutions to avoid or opt out of surveillance are provided, and legally enforced ethical safeguards prohibit intrusive monitoring or surveillance that exploits humans’ most intimate and personal spaces and lives, or social and environmental living conditions.

Lastly, we can steer developments in the desired direction (RQ4) and achieve a uniting vision by anticipating future scenarios in a participatory Foresight process that considers multiple perspectives. By carefully considering the potential challenges and opportunities ahead, we aim to have laid the foundation for a strategic roadmap that will guide us toward our desired future state: a truly humanity-centered intelligent environment. The emerging developments identified in the horizon scanning point towards greater multi-disciplinary alignment on the need to act to build scenarios that strengthen the ability to preserve human and societal agency in future technical Intelligent Environments. In practice, the horizon scanning exercise and the description of future visions that are plausible, possible, and desirable can provide a multi-disciplinary working group with a range of success scenarios for establishing the most desirable Human-Centered technical translation in future Intelligent Environments.