1 Adaptive Systems – The Need for an Interdisciplinary Perspective


Ivo Benke, Alexander Maedche, Jella Pfeiffer, Christof Weinhardt

Adaptive systems based on information technology (IT) and artificial intelligence (AI) capabilities have become ubiquitous in almost all areas of our private and business lives. Think of social media feeds that adjust their content to our behavior or robo-advisors that adapt their buying behavior to the stock market and our financial preferences. Similarly, fitness, lifestyle, and healthcare apps continuously track users to monitor and assess personal fitness, nutrition, and health status and to adapt their recommendations to the specific context (Alawneh et al. 2023; Noorbergen et al. 2019). Adaptive systems are also finding their way into companies, e.g., shift plans dynamically adapt to the requirements of manufacturing sites, such as production demand at a given time or blue-collar workers’ physical abilities.

Overall, adaptive systems hold enormous potential to create individual, organizational, and societal benefits. However, their advanced adaptive IT capabilities are not without controversy, since they also pose social, ethical, and economic threats to individuals, groups, organizations, and society as a whole. For example, they can lack transparency about how they arrive at their adaptation behavior (e.g., in financial advice), or they can foster addictive usage behaviors through highly engaging interfaces (e.g., in social media apps). Besides the information systems (IS) discipline, other disciplines such as economics, management, psychology and neuroscience, and computer science research adaptive systems from social and technological points of view. In doing so, they have developed different perspectives on this socio-technical phenomenon, with diverse understandings of the concept, its characteristics, and its boundaries. Given the importance and ubiquity of adaptive systems in our lives, a shared understanding is needed to guide future research on adaptive systems across single disciplinary boundaries and to contribute to the greater benefit of individuals, groups, organizations, and society.

Stepping back in time, the concept of adaptation originates from the field of biology. The famous evolutionist Charles Darwin described Darwinian adaptation as an organism’s feature that was functionally designed by the selection process of evolution acting in nature (Thornhill 1997). In general, it describes an organism’s ability to adjust its characteristics and behavior to an environment within a given time frame through natural selection. With the rapid progress of IT and the proliferation of adaptive systems, a range of disciplines have adopted the generic adaptation concept from biology, formed their own discipline-specific understanding of adaptive systems, and employed it within their subjects of investigation and their disciplinary knowledge contributions. Unsurprisingly, the respective disciplines have defined the concept of adaptive systems very differently, depending on their own perspectives and self-image.

Against this backdrop, the goal of this discussion article is to highlight the different perspectives of the above-mentioned disciplines when investigating adaptive systems from a social and technological point of view. Depending on their individual roots, the disciplines unsurprisingly place more importance on either the social or the technical dimension of adaptive systems. Building on this insight, we propose a first step towards a shared understanding of hybrid adaptive systems that balances the social and the technological dimension equally. This conceptualization may lay the foundation for establishing an interdisciplinary research perspective on this socio-technical phenomenon. We argue that interdisciplinary research on hybrid adaptive systems will ultimately contribute to the greater benefit of individuals, groups, organizations, and society.

2 Management


Petra Nieken, Martin Klarmann

Management sciences contribute to the knowledge of adaptive systems by studying their implications for organizations (i.e., institutions or companies), which are an important element in our lives. Specifically, most of us experience organizations in at least two roles: as employees and as customers. To illuminate a management perspective on adaptive systems, we therefore turn in the following to Human Resource Management and Marketing, two management disciplines that focus on humans in the roles of employees and customers, respectively.

2.1 Human Resource Management Perspective

Adaptive systems challenge classical management paradigms and call for a thorough investigation of hybrid management practices, since their implications for managers and organizations are manifold. Adaptive systems with system-driven adaptation need a clear and transparent formulation of objectives and goals. Given that the goals of an organization and of its members often differ, this is not an easy and straightforward task. How do organizations balance conflicting goals and perspectives and align them within an adaptive IT system? Without transparency and clear guidelines, such systems will be prone to manipulation and bias, leading to distrust and fairness concerns. However, even with clear objectives and goals, humans will adapt to the adaptive IT system and try to challenge its underlying mechanisms. Thus, we will observe a co-evolution of humans and technology in adaptive systems in management. Today, we see a rapid integration of technological devices and assistance systems recording biosignals in order to better understand and assist humans in the workplace. It is of utmost importance to understand the underlying mechanisms and behavioral effects in order to help design efficient and fair adaptive (management) systems.

Human Resource Management focuses on the human and aims to establish a motivating, safe, and inspiring work environment. Systems that adapt to individual needs can help fulfill this goal. On the other hand, handing over control to an adaptive system can spark fear and reduce trust in managerial decisions. Consider, for instance, the core organizational tasks of performance evaluation and leadership. Adaptive systems can potentially reduce human bias and process larger amounts of information, leading to fairer performance evaluations. By combining performance data with physical or psychological parameters of the employees, these systems can evaluate employees and match them with the most fitting position (see, e.g., Bartholomeyczik et al. 2022 for the usage of EEG in work contexts). However, employees might feel inclined to alter their behavior strategically (Hausfeld et al. 2021). Adaptive systems can also assist leaders in tailoring their communication to the individual’s needs (see, e.g., Nieken 2022b and Nieken 2022a). Given that leadership is typically seen as a task that needs to be assigned to a human, being evaluated and led “by an algorithm” might lead to lower employee engagement and trust in the organization. Thus, transparency and a clear definition of goals, power, and decision rights are crucial in human resource management to ensure the acceptance of adaptive systems.

2.2 Marketing Perspective

In marketing, adaptive systems play an ever-growing role. A prominent example everyone is familiar with is the recommendation engine. Online retailers learn the preference structures of their customers from their search behavior and earlier purchases. Based on these preferences, they customize which products a customer sees in a given search, and in what order. Another example is “programmatic advertising”, where algorithms bid for the opportunity to show certain content to potential customers. The bids are based on an ad’s expected performance given, among other signals, the potential customer’s current search history. More will become possible. For instance, we have found that it is possible to predict customer mood based on small audio snippets, which could make voice shopping mood-adaptive (Halbauer and Klarmann 2022).
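To make the bidding idea concrete, the following Python sketch shows value-based bidding under strongly simplifying assumptions: a toy click-through-rate estimate derived from the overlap between ad keywords and the user’s recent searches, multiplied by the value of a click. All names, formulas, and numbers are hypothetical illustrations, not the mechanics of any actual ad platform.

```python
def expected_ctr(ad_keywords: set, search_history: list) -> float:
    """Toy click-through-rate estimate: keyword overlap with recent
    searches, squashed into a plausible probability range."""
    terms = {word for query in search_history for word in query.split()}
    overlap = len(ad_keywords & terms)
    return min(0.9, 0.02 + 0.05 * overlap)

def compute_bid(ad_keywords: set, search_history: list,
                value_per_click: float) -> float:
    """Value-based bid: expected click probability times click value."""
    return expected_ctr(ad_keywords, search_history) * value_per_click

ad = {"running", "shoes", "trail"}
history = ["trail running tips", "running shoes sale"]
print(f"bid: {compute_bid(ad, history, value_per_click=2.50):.2f} EUR")
```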

It is a key tenet of marketing theory (and well substantiated by empirical results) that marketing (and companies) will only be successful if their offerings satisfy customer needs. Exaggerated claims (or worse, manipulation; Klarmann 2020) can easily backfire. For the design of adaptive systems that are to be used by customers, this implies that, at their core, they need to be adaptive to customer needs. Therefore, from a marketing perspective, adaptive systems need to fulfill a number of requirements that may differ from those emphasized in other disciplines (at least in terms of prioritization). Technically, they should be based on validated measures and causal inferences (instead of mere predictions).

Regarding the measures used, human–computer interfaces often rely on measures that are relatively far from the needs that really drive customer behavior (e.g., eye movements, mouse movements, physiological reactions). The risk of these measures is that, although they may be correlated with needs (which would need to be established), they are likely to be confounded with a number of other variables (and possibly even other needs). From the customer’s perspective, this may create surprising (and seemingly random) adaptive responses from the system.

Regarding causal inference, systems that are merely based on prediction (i.e., correlational data) will always be highly dependent on context and time. This may quickly create unplanned (and possibly inappropriate) customer experiences when circumstances change. Everyone who has shopped using recommendation engines has faced countless situations where the recommendations seem completely useless. This is often because the underlying algorithms are purely correlational.

3 Economics


Clemens Puppe, Julia Nafziger

Digitalization has an impact on nearly every aspect of human interaction. Extending the research territory to the intersection of digitalization and economics thus appears to be a natural and worthwhile endeavor. And indeed, both research fields try to understand and design complex interaction networks by developing appropriate mechanisms and systems (Blume et al. 2015). Moreover, when integrating our understanding of adaptive systems into economic research, it becomes clear that numerous economic problems are instances of complex adaptive systems themselves.

3.1 Adaptive Systems and Interactive Decisions

Complexity arises, among other things, from the network of interacting economic agents. Their aggregate dynamic behavior can be described by assessing individual behavior, strategies, and decisions (Holland and Miller 2023). However, as these authors also argue, adaptive agent behavior cannot perform at an optimal level during all iterations of decision making. Indeed, even if adaptive systems are designed to help agents perform better, non-optimal states are temporarily unavoidable due to the time needed to adapt. Important economic examples are interactions between agents in networks, for instance, between firms that compete on platforms. The very idea of the mechanics of adaptation necessitates an analysis of satisficing rather than optimizing systems (Simon 1957). Improvement rather than optimality is the goal that adaptive systems need to serve. This poses a number of challenges for modeling, as we have to account for non-linear feedback, strategic behavior, and individual and spatial heterogeneity on changing time scales (Levin et al. 2013). The analyses must also account for agents’ ability to change and to learn from experience. Examples are the expanding innovations of information technologies and novel applications that are continuously integrated into our daily lives and influence strategies in decision making processes. Addressing these challenges in economics requires opening up the perspective to truly interdisciplinary research.
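As a toy illustration of the satisficing stance, consider the following Python sketch, in which an agent accepts the first option that meets its aspiration level instead of searching for the optimum; the options, utility function, and aspiration level are purely hypothetical.

```python
def satisfice(options, utility, aspiration_level):
    """Return the first option that is 'good enough' (Simon 1957)."""
    for option in options:
        if utility(option) >= aspiration_level:
            return option                    # good enough: stop searching
    return max(options, key=utility)         # fallback: best option found

offers = [3.1, 4.8, 5.6, 9.9, 7.2]
choice = satisfice(offers, utility=lambda x: x, aspiration_level=5.0)
print(choice)  # 5.6: an improvement over the aspiration, not the optimum 9.9
```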

3.2 The Use of Adaptive Systems to Design Digital Nudges in Complex Situations

Complexity does not only characterize networks of interacting economic agents, but also individual decisions: even when making a very simple decision, an individual must take many decision-relevant factors into account and reach a decision within a very short period. Individuals who pay limited attention or face other cognitive limitations may not consider all these factors (for an overview of models of limited attention see Gabaix (2019)) and hence make suboptimal decisions. While adaptive systems might not achieve optimality in such complex environments, they can help to improve decisions. For example, when individuals pay limited attention, adaptive systems can be used to design nudges, that is, tools that help to steer individual decisions in the “right” direction without interfering with the freedom of choice (Thaler and Sunstein 2009). Specifically, in recent years, digital nudges have become popular (for an overview of different types of nudges see Hummel and Maedche (2019)), encompassing, for example, adaptive digital tools such as app-based goal setting (e.g., Loeschel et al. 2020) or computer games (Koch et al. 2023). Such systems make it possible to send individualized feedback or reminders and to adapt to experience, as sketched below. Challenges in designing such nudges in complex environments arise from the presence of behavioral or attentional spillovers on non-target behaviors (cf. Altmann et al. 2022; Koch et al. 2023; Trachtman 2021) – a challenge that adaptive systems might be better suited to tackle than traditional tools.
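A minimal sketch of what “adapting to experience” could mean for such a digital nudge is given below: a reminder whose interval adjusts depending on whether past reminders were acted upon. The update rule and its bounds are illustrative assumptions, not taken from the cited studies.

```python
class AdaptiveReminder:
    """Illustrative adaptive nudge: the reminder interval adapts to experience."""

    def __init__(self, interval_hours: float = 24.0):
        self.interval_hours = interval_hours

    def update(self, acted_on: bool) -> None:
        if acted_on:
            # The nudge worked: keep it sparse to avoid habituation.
            self.interval_hours = min(self.interval_hours * 1.25, 72.0)
        else:
            # The nudge was ignored: try reminding a bit sooner.
            self.interval_hours = max(self.interval_hours * 0.8, 6.0)

reminder = AdaptiveReminder()
for acted in [False, False, True]:
    reminder.update(acted)
print(f"next reminder in {reminder.interval_hours:.1f} h")  # 19.2 h
```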

4 Psychology and Neurocognitive Sciences


Ulrich Ebner-Priemer, Manfred Herrmann, Benjamin Scheibehenne

Psychology and neurocognitive science allow us to explore the user as the (inter)acting subject in adaptive systems, offering theoretical concepts and empirical evidence on why and how human subjects behave in adaptive IT systems. On a conceptual level, this might be achieved by placing the cognitive architecture at the intersection between humans and machines. This means that machine behavior might foster (instead of merely substitute for) human decision making processes and that human decision behavior will translate into the design of adaptive systems. On a methodological level, we argue that psychology will deliver the basic building blocks needed to make the whole machinery work. This covers the following aspects: first, providing behavioral and biosignal data to obtain information on whether a system is adaptive (or even needs adaptation); second, delivering the cognitive and statistical models needed to build a framework for experimental research on adaptive systems; third, evaluating the effects and effect sizes of adaptive system interventions on individual and social behavior using human activity measures. Generally, we argue that successful adaptive information technology requires a (better) understanding of human perception and behavior (e.g., how much information people can process, which representation formats foster comprehension, and how situational factors and emotions influence perception and preferences). The following three sub-sections present three different perspectives on the role of psychology and the neurocognitive sciences in adaptive systems research.

4.1 Measuring Psychological Phenomena for Adaptive Systems: Validity and Ethical Concerns

The task of measuring psychological phenomena ranges from mildly to extremely difficult. This stems from the breadth of psychological phenomena, namely behavior, emotions, and cognition. Health psychology is an example of where adaptive systems can be applied comparatively easily, as health behavior can be measured continuously in daily life. Sedentariness, defined as prolonged sitting, causes not only physiological health issues but also negatively affects mood and the ability to work (Giurgiu et al. 2021). Adaptive systems can use acceleration sensors to measure body posture in everyday life, classify prolonged sitting periods in real-time, and trigger alarms to study the underlying mechanisms or to implement behavioral change (Giurgiu et al. 2020). Such adaptive systems require the ability to measure and analyze sedentariness directly, in this case via acceleration sensors. In other words, we can measure and analyze the behavior itself in a valid way in everyday life. In psychology, such methods are often referred to as Just-In-Time Adaptive Interventions (Nahum-Shani et al. 2018).
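A minimal sketch of such a Just-In-Time Adaptive Intervention is shown below, assuming posture has already been classified from acceleration data into “sitting” vs. other labels once per minute. The threshold and prompt text are illustrative, not taken from the cited studies.

```python
SITTING_LIMIT_MIN = 30  # intervene after 30 min of uninterrupted sitting

def monitor(posture_stream):
    """posture_stream yields one posture label per minute (e.g., from an
    accelerometer-based classifier); yields a prompt when the limit is hit."""
    sitting_minutes = 0
    for posture in posture_stream:
        sitting_minutes = sitting_minutes + 1 if posture == "sitting" else 0
        if sitting_minutes >= SITTING_LIMIT_MIN:
            yield "Prompt: you have been sitting for 30 minutes - stand up!"
            sitting_minutes = 0  # reset after intervening

stream = ["sitting"] * 35 + ["standing"] + ["sitting"] * 31
for prompt in monitor(stream):
    print(prompt)  # fires twice for this example stream
```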

Unfortunately, other psychological phenomena are more difficult to monitor in daily life. An example from the mental health context is the automated real-time prediction of upcoming illness episodes. Such an adaptive system would track the psychological phenomena of interest, analyze them in real-time, activate alarms in case of emergency, and adapt alarm thresholds in case of false or missed alarms (Mühlbauer et al. 2018). In such a scenario, mobile sensing of bipolar disorder is a prime candidate (Ebner-Priemer and Santangelo 2020), as mobile sensing parameters are closely related to the psychopathology of interest (e.g., altered communicativeness). Fortunately, communicativeness, operationalized via multiple parameters including, for example, incoming and outgoing phone calls and texts, has revealed a significant empirical relation to illness episodes. However, the validity of the real-time measured proxy of communicativeness remains unsatisfactory. Human communicativeness encompasses much more than just communication via the smartphone, even though that proportion has increased tremendously over the last decade. Direct communication with others, communication in groups, communication via phones or (non-smartphone-based) video conferences, facial expressions, gestures, and eye contact during communication remain integral components of communicativeness. Smartphone communication, as a proxy with limited validity, represents just a portion of human communicativeness and does not measure the construct itself, in contrast to our example of sedentariness. Other psychological phenomena, especially cognition and emotions, are much more difficult or even impossible to track in daily life in real-time. Whereas peripheral physiology has been used successfully for decades, its validity must be questioned if the same physiological measure is always used as a proxy for different constructs, such as heart rate (variability) for stress, anger, relaxation, flow, engagement, and affectivity, to name just a few.
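The threshold-adaptation idea can be illustrated with a small sketch: an alarm fires when a communicativeness proxy (e.g., daily calls plus texts) drops below a personal threshold, and the threshold is tightened after false alarms and relaxed after missed episodes. The numbers and update factors are hypothetical.

```python
class EpisodeAlarm:
    def __init__(self, threshold: float):
        self.threshold = threshold  # alarm if the proxy falls below this value

    def check(self, proxy_value: float) -> bool:
        """proxy_value: e.g., today's count of calls and texts."""
        return proxy_value < self.threshold

    def feedback(self, alarmed: bool, episode_occurred: bool) -> None:
        if alarmed and not episode_occurred:
            self.threshold *= 0.9   # false alarm: be stricter next time
        elif not alarmed and episode_occurred:
            self.threshold *= 1.1   # missed alarm: be more sensitive

alarm = EpisodeAlarm(threshold=20.0)
print(alarm.check(proxy_value=17))            # True: raise an alarm
alarm.feedback(alarmed=True, episode_occurred=False)
print(alarm.threshold)                        # 18.0 after the false alarm
```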

Ethical and privacy concerns can also rule out some assessments that would be feasible from a technological point of view. Continuous audio recordings are heavily restricted in several countries (e.g., Germany), owing to regulations such as the General Data Protection Regulation (GDPR) and telecommunications acts. However, psychological phenomena such as mood, stress, or motivation could be approximated through the automatic extraction of features from audio signals, a method mainly applied in the affective computing research community. Recent advances in AI might enable real-time analyses and feature extraction on the mobile devices themselves, storing just the extracted pseudonymized features.
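The on-device pattern could look like the following sketch: aggregate features are computed locally and only those are stored, while the raw audio is discarded immediately. The feature choice (signal energy, zero-crossing rate) is a simple illustration, not a recommendation from the cited literature.

```python
import numpy as np

def extract_features(audio_frame: np.ndarray) -> dict:
    """Compute coarse, pseudonymized audio features on-device."""
    energy = float(np.mean(audio_frame ** 2))
    zero_crossing_rate = float(np.mean(np.abs(np.diff(np.sign(audio_frame)))) / 2)
    return {"energy": energy, "zero_crossing_rate": zero_crossing_rate}

frame = np.random.randn(16000)       # stand-in for one second of 16 kHz audio
features = extract_features(frame)   # only these features would be stored
del frame                            # the raw recording is discarded at once
print(features)
```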

4.2 Decision-Making Perspective

Human decision making can be understood as an interaction between individual cognition, such as perceiving and processing information, on the one hand, and the structure of the environment on the other (Simon 1990). This interaction implies that behavior, and thus revealed preferences, are inherently context dependent (e.g., Tversky and Simonson 1993). If the decision context itself changes as a function of individual behavior, as is the case for adaptive systems, interesting feedback loops occur. These may yield malign consequences such as filter bubbles, group polarization, or herding behavior, but they may also foster sound decision making, for example through decision support systems that provide information in a transparent way and thus help people cut through the clutter (e.g., Ruoff et al. 2022). Adaptive systems can also help decision makers overcome inherent cognitive capacity limits, e.g., with respect to working memory or information processing (Olschewski et al. 2018). As an example, one may think of adaptive sorting and screening algorithms that narrow down increasingly large (online) assortments and thus allow consumers to quickly find options (products, services, romantic partners, etc.) that meet their preferences (e.g., Scheibehenne et al. 2010).

4.3 Going Beyond the Behavioral Evaluation: The Impact of Human Brain Activity Patterns

Concepts derived from behavioral economics, such as nudging food choices, might provide the basis for adaptive systems that shape a subject’s behavior by combining technical and societal cueing. As suggested by Hercberg et al. (2022), one of these health nudges is the Nutri-Score, a color-coded front-of-package nutrition label that has been implemented on a voluntary basis in European grocery settings. It has been demonstrated that this 5-colour label effectively guides consumers’ dietary decisions towards healthier choices at the point of purchase (Andreeva et al. 2021). If a color label promotes dietary decisions towards healthier food items, it is interesting to understand which part of the decision making process is affected by the label and how it contributes to the respective decision to adopt a healthier lifestyle. Thus, it has to be investigated which attributes of a food item – such as taste, price, subjective valuation, or various health attributes – drive the individual’s choice of the respective food item. Whether or not adaptive systems will lead to sustainable behavioral changes depends highly on how an intervention is able to change the neural architecture underlying the intended behavioral change. By providing a cognitively inspired and neurobiologically grounded framework, the neurocognitive perspective might add significantly to the understanding of technically based adaptive systems. With respect to the above-introduced labeling of food items to induce healthier dietary decisions, data from neurocognitive experiments using functional magnetic resonance imaging (fMRI) have demonstrated that color-coded (traffic light) nutrition labels significantly alter subjects’ valuation of healthier food items, mediated by an increased willingness to pay (Enax et al. 2015). The behavioral change translates into stronger activity and functional coupling in neuronal networks associated with the control of inhibitory behavior (such as the ventromedial prefrontal cortex) and self-control in decision making (such as the inferior and dorsolateral parts of the frontal cortex). A significant change in the functional connectivity of brain networks is associated with the valuation and reward expectation of food items and with the inhibition of unhealthy choices signaled via color codes. These data demonstrate how adaptive systems interventions might be reflected in the neural architecture.

Although the added value of neurocognitive approaches for understanding how adaptive systems shape human behavior is still under debate and strongly affected by diverging theoretical concepts, statistical models of human decision making, and experimental designs of decision processes, there is increasing evidence that analyzing brain activity will contribute significantly to the successful design of adaptive systems. Understanding the neuronal framework will not only help us understand how adaptive systems work but will also offer insight into how and why human behavior, such as adopting healthier food choices, systematically escapes rational choice models derived from purely technically inspired scientific approaches (Mobbs et al. 2018).

5 Computer Science


Michael Beigl, Felix Putze, Tanja Schultz

In computer science, the term “adaptation” refers to a process in which a technical system adapts its behavior to individual users based on three sources of information: the context of use, the environment, and information about its users (Feigh et al. 2012). The latter may include accumulated big data on targeted user populations as well as current and enduring characteristics of individual users. Computer science is concerned with implementing the processing of input, the adjustment strategy that changes and adjusts this input, and the generation of output from the input and the adjustment strategy. While these three components can be conceptualized as individual parts of an adaptive system, they share many methods and challenges. For example, machine learning is a key concept in all parts: for interpreting the heterogeneous input signals, for decision making in the adjustment process, and for generating the output. A joint challenge is the handling of diverse platforms, from mobile systems to mixed reality.
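The three-component decomposition can be made concrete with a schematic Python sketch; the light-control example and all values here are placeholders chosen for illustration, not a real system.

```python
def process_input(raw_context: dict) -> dict:
    """Input processing: interpret heterogeneous signals into a context model."""
    return {"ambient_light": raw_context.get("lux", 300),
            "user_active": raw_context.get("motion", False)}

def adjust(state: dict) -> dict:
    """Adjustment strategy: decide how to adapt, given the interpreted state."""
    brightness = 0.3 if state["ambient_light"] < 100 else 0.8
    return {"screen_brightness": brightness if state["user_active"] else 0.0}

def generate_output(decision: dict) -> str:
    """Output generation: render the decision as concrete system behavior."""
    return f"set brightness to {decision['screen_brightness']:.0%}"

print(generate_output(adjust(process_input({"lux": 50, "motion": True}))))
```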

The design of adaptive systems is investigated in many research communities in computer science, e.g., in the field of speech communication systems that adapt to speakers (Leggetter and Woodland 1995), domains (Samarakoon et al. 2018), and languages (Schultz and Waibel 2001), in human–computer interaction (HCI) (Avalos-Viveros et al. 2018; Benyon and Murray 1993; van der Zwaag et al. 2010), in robotics and embodied agents (Billard et al. 2007), in software engineering (Kephart and Chess 2003; Kramer and Magee 2007), in ubiquitous computing (Coutinho et al. 2021), and in interactive systems with applications that range from clinical support (Sutton et al. 2020) and health and nursing care (Huter et al. 2020) to everyday assistance at home (Graf et al. 2009) and everyday activities at large (Ramirez-Amaro et al. 2017), just to name a few. In the following we briefly introduce a cognitive systems and pervasive computing perspective.

5.1 Cognitive Systems Perspective

In the field of cognitive systems and AI, cognition-enabled agents are conceived to perceive their context and environment through sensors and to act upon that environment through actuators (Russel and Norvig 2003). Such agents are characterized by the ability to learn about, from, and with humans – an ability that is crucial for cognitive systems that interact with different users in different contexts or environments, and that do so over extended periods of time. In such cases, the cognitive system must deal with both intra-individual variations in user states and traits and inter-individual differences between users.

In the following, we highlight selected trends and advancements in the field of cognitive systems, focusing in particular on biosignal-adaptive cognitive systems. State-of-the-art adaptive systems apply the human-system-interaction-loop concept, in which the cognitive system – understood as an adaptive IT system – receives and interprets multimodal biosignals of the human user (Schultz et al. 2013). In such biosignal-adaptive cognitive systems, biosignals are processed and classified into user states (e.g., emotions, stress, workload, mental tasks) and traits (e.g., identity, gender, personality) in order to adapt the interaction strategy so that it best supports the individual users, as adaptive social entities, in their aims and needs. At the same time, the cognitive systems provide transparent and low-latency feedback to the users, so that users may intentionally moderate their own biosignals to tailor the system output to their needs. We refer to this bilateral adaptation process as “co-adaptation”. We will introduce two examples of biosignal-adaptive cognitive systems: (1) biosignal-enabled speech communication systems that process speech-related biosignals from brain activity with very low latency, so that users can hear themselves speaking even when they are only imagining doing so, and (2) biosignal-adaptive mixed reality interfaces, for which adaptation is crucial to avoid user distraction and to cope with rapidly changing contexts and limited bandwidth for explicit interaction.

5.1.1 Biosignal-Enabled Speech Communication

Speech is a complex process that starts in the brain and ends with respiratory, laryngeal, and articulatory gestures creating the acoustic signals of communication. Speech-related biosignals can be measured at the level of the nervous system, muscular action potentials, speech kinematics, and sound pressure. Biosignal-enabled speech communication systems convert these biosignals into text or synthetic voices by replacing the acoustic signal processing frontend of automatic speech processing with methods tailored to biosignals, while leaving the natural language processing back-end unchanged (Schultz et al. 2017). This approach opens up new avenues for speech communication systems. It makes speech processing possible in the absence of an intelligible acoustic signal, allowing for silent speech interfaces (Denby et al. 2010) that give a voice to mute people (e.g., laryngectomees), and it allows speech to be processed when acoustic sound is not desirable (e.g., to avoid disturbance or preserve privacy in public spaces) or is subject to noise (e.g., adverse environments, under water, masks). Furthermore, since speech-related biosignals precede the acoustic output by tens of milliseconds, they can provide instant feedback to the human in the system-interaction-loop.

In our work, we take advantage of these benefits by capturing articulatory muscle activities using surface Electromyography (EMG) and encoding the resulting electrical biosignals into representations directly decoded into audible speech signals. Since the EMG signal appears approximately 60 ms prior to articulatory movements, the latency between system input and output can be shortened, thereby improving face-to-face communication quality. The almost instant auditory feedback increases articulatory awareness, which is useful for speech therapy and language learning. Also, since EMG captures muscle activity rather than acoustics, EMG-based systems can handle silent speech where a speaker moves the articulators as if producing normal modal speech but suppresses the pulmonary airstream so that no sound is emitted (Janke and Diener 2017).

5.1.2 Biosignal-Adaptive Mixed Reality Interfaces

Capabilities for adaptation become especially important in systems on the mixed reality continuum (Milgram et al. 1995), including augmented reality (AR) and virtual reality (VR). The reasons for the increased need for adaptation are threefold: First, rich, immersive content inevitably generates stimuli in mixed reality that can overwhelm or distract users and that cannot easily be avoided by looking away from the screen. Second, the context changes dynamically, through the continuous generation of new content in the case of VR or through mobility in the case of AR. Third, the bandwidth for explicit user interaction is limited, as typical input modalities such as gestures or speech cannot convey all types of information very quickly and may require forms of interaction that are not always available.

For adaptive VR, researchers have explored ways of using adaptation to tailor an experience to the user profile (Baker and Fairclough 2022). Besides characteristics such as gender or age, a typical aspect of the user profile is the user’s mental workload level, which can be monitored through EEG (Tremmel et al. 2019), fNIRS (Putze et al. 2019), or other physiological signals (Chiossi et al. 2022), and which can be used to adjust the visual complexity of the virtual environment accordingly (i.e., by reducing the number of elements). Other types of adaptation rely more on the detection of individual events, such as responding to different types of errors in the VR scene (tracking errors, rendering glitches, etc.), which can be detected from the brain activity following the occurrence of such events (Si-Mohammed et al. 2020).
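A hedged sketch of such workload-adaptive complexity reduction is shown below: a continuously updated workload estimate (e.g., derived from EEG) is smoothed and mapped to a budget of optional scene elements. The smoothing factor and the mapping are illustrative design choices, not taken from the cited studies.

```python
def smooth(previous: float, new: float, alpha: float = 0.2) -> float:
    """Exponential smoothing so the scene does not flicker with noisy estimates."""
    return (1 - alpha) * previous + alpha * new

def scene_element_budget(workload: float, max_elements: int = 50) -> int:
    """Map workload in [0, 1] to a visual-complexity budget: the higher the
    estimated workload, the fewer optional elements the environment shows."""
    return max(5, int(max_elements * (1.0 - workload)))

workload = 0.3
for raw_estimate in [0.4, 0.7, 0.9]:   # e.g., successive EEG-based estimates
    workload = smooth(workload, raw_estimate)
    print(scene_element_budget(workload))  # 34, 30, 25: scene gets sparser
```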

For adaptive AR, a strong focus has been on attention as a target variable, as this limited cognitive resource comes under particular stress in AR settings, where the complexity of the real world is complemented by virtual stimuli that can be equally complex. For example, Vortmann and Putze (2020) demonstrated the feasibility of EEG-based attention adaptation for a smart home control system, which significantly improved usability. While the research-grade devices for such applications are currently still unwieldy and not suited for extended use, further research has already shown that the same principle can be applied to mobile systems: Vortmann et al. (2022) demonstrated how consumer-grade EEG could be leveraged in a mobile AR-based translation application to add attention adaptation that turns off visual clutter when no attention is being paid. Another domain in which real-world applicability is being investigated is AR systems in learning environments, for example through the use of attention-measuring glasses (Kosmyna et al. 2018).

5.2 Pervasive Computing Perspective

Today, AI plays a critical role in adaptive systems by enabling the development of intelligent algorithms that can learn and adapt to changing circumstances. AI techniques such as machine learning, neural networks, and evolutionary algorithms are used to create intelligent systems that can adapt to different situations, ultimately achieving hybrid intelligence (Akata et al. 2020). A prominent example is ChatGPT, whose contribution to research on adaptive systems lies in its ability to provide researchers with access to a vast amount of information for generating new insights. Although the system itself does not adapt, its large and complex context makes it appear adaptive: due to the dynamic and large context of GPT (Yang et al. 2022), both the system output and the human interpretation adapt, making the system appear almost magically intelligent.

A more established domain is control systems. Control systems are used in adaptive systems to manage input and output variables and to ensure that the system adapts correctly to changing circumstances. These methods can be combined, for example, with software engineering methods to create self-adaptive programs (Brun et al. 2013) or to support their development (Krupitzer et al. 2015). HCI is concerned with the design and evaluation of the interaction between a human and a computer, including interface design, the environment (ergonomics), and the interaction process. With the advent of ubiquitous computing, the HCI community has added to its research agenda the concept of dynamic responses to the user's context. This contextualization involves adapting user interfaces and interactions based on the user's context (Abowd et al. 1999). With the addition of sensors to computer systems, the possibilities for adaptation go beyond the direct input the user gives the system and include, for example, the environment or biological conditions (Gellersen et al. 2002). Beyond personal computer (PC)-based artifacts (laptops, smartphones, watches, etc.), adaptation is even more necessary in robotics. Robotics researchers use adaptive systems to design robots that can operate in dynamic and uncertain environments by using a mixture of the above concepts. Social robots represent a significant advancement in this field, allowing users to interact with anthropomorphized robots (Breazeal 2003).

From the above-mentioned scientific work, a series of characteristics can be derived to classify adaptive systems:

Degree of system adaptation vs. user adaptation – The degree to which the system adapts to the user's needs versus the degree to which the user has to adapt their behavior to operate the computer system successfully

User input and output variables for adaptation – Which and how many input variables (parameters) and output variables, together with an adjustment mechanism, define the adaptation from input to output

Sensor or system input modalities used for adaptation – Which input parameters are sensed by sensors or system input modalities to enable the adaptation

Adaptation mechanism – Contains the adaptation goals and a transfer function as an abstract instantiation of the detailed procedural transfer steps from input to output

Dynamic changes – Whether the system dynamically changes the logic of its adaptation mechanism or is given one that changes the output variables according to external or internal contexts

Continuous vs. static adaptation – Whether the system continuously executes the adjustment mechanism (i.e., the process from input to output parameters via the transfer function) or applies the adaptation statically

Flexibility – The range of environments and situations the system can adapt to

Robustness – The system continues to function even if some of its components fail or do not function optimally, and can recover from unexpected changes in its environment

Scalability – The system can be scaled up or down to handle larger or smaller tasks without losing its adaptive capabilities

Ability to learn – The system can learn from its past experiences and improve its performance over time

Integration – Whether the system combines different types of adaptive approaches, such as rule-based systems, neural networks, and evolutionary algorithms, to achieve optimal performance, or is based on a single type

Responsiveness – The system can respond quickly to changes in its environment or inputs and adjust its behavior accordingly

6 Information Systems


Marc Adam, Ivo Benke, Verena Dorner, Michael Knierim, Alexander Maedche, Jella Pfeiffer, Christof Weinhardt

From an IS perspective, adaptive systems are an interesting phenomenon, as they represent a contemporary class of IS that leverages state-of-the-art AI and sensor technologies to perform system-driven adaptations and, in parallel, has strong socio-economic impacts on social entities at multiple levels (e.g., individuals, organizations). We believe that only by considering both the social side (i.e., the adaptive social entities) and the technological side (i.e., the adaptive IT system) of adaptive systems can positive instrumental as well as experiential outcomes (Liu et al. 2017) be created for individuals, organizations, and society. There are several interesting research opportunities from the IS perspective for understanding, designing, and theorizing adaptive systems.

6.1 Representation Theory and Theory of Effective Use in Adaptive Systems

Representation theory (Wand and Weber 1995) as well as the theory of effective use (Burton-Jones and Grange 2013) are interesting and relevant theories for further development in the context of adaptive systems. In the following, two specific perspectives will be briefly elaborated on.

First, representation theory describes the nature and purpose of IS (Burton-Jones and Grange 2013). From a conceptual point of view, one can distinguish three interconnected structures that IS consist of: (1) the deep structure represents the meaning of the focal real-world phenomena, e.g., in the form of the data model or algorithms; (2) the surface structure enables the user to interact with the deep structure, e.g., through a graphical user interface; (3) the physical structure implements the two other structures, e.g., in the form of devices such as a smartphone and/or a wearable. From a conceptual point of view, further research is required to refine this rather generic conceptualization for the class of adaptive IT systems, so that adaptation elements can be properly defined and classified and system-driven adaptations performed on them. For example, when considering the surface structure of an adaptive IT system, many different elements may be adapted, e.g., the input controls, the navigation structure, or the layout. Similarly, the deep structure includes multiple elements that can be adapted. Overall, there is a need to establish a descriptive theory of analysis that describes and classifies the potential adaptation elements of adaptive IT systems on a more specific level (Gregor 2006). This would provide a foundation for explicitly defining the design space of adaptive IT systems.
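To illustrate what such a classification of adaptation elements might look like, the following sketch encodes candidate elements per structure and derives a deliberately naive design-space size (each element either adapted or static). The element lists mix examples from the text with hypothetical additions.

```python
# Candidate adaptation elements per structure (illustrative, not exhaustive).
ADAPTATION_ELEMENTS = {
    "surface_structure": ["input_controls", "navigation_structure", "layout"],
    "deep_structure": ["data_model", "recommendation_algorithm"],
    "physical_structure": ["smartphone", "wearable"],
}

def design_space_size(elements: dict) -> int:
    """Naive bound: each element can either be adapted or left static."""
    n = sum(len(per_structure) for per_structure in elements.values())
    return 2 ** n

print(design_space_size(ADAPTATION_ELEMENTS))  # 128 possible configurations
```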

Second, building on representation theory, the theory of effective use emphasizes that users need to take learning actions (i.e., actions a user takes to understand the system, its domain, or how to engage with the system in an informed way) to achieve effective use (Burton-Jones and Grange 2013). Effective use is defined as “using a system in a way that helps attain the goals for using the system” (Burton-Jones and Grange 2013, p. 633). Learning actions cover different user activities, e.g., exploring training material, leveraging onboarding procedures (Ruoff et al. 2022), receiving support through help functions, or talking to peers (Lauterbach et al. 2020). Furthermore, the theory of effective use argues that, besides users’ learning actions, adaptation actions (i.e., actions a user takes to improve (1) a system’s representation of the domain of interest or (2) the access to it) with regard to the physical and surface structures as well as the representation (deep structure) also positively impact transparent interaction, representation fidelity, and, finally, informed action. Considering that the adaptive IT system itself continuously performs adaptation actions, an interesting research question emerges: how should the user’s learning actions and the system’s adaptation actions co-evolve to achieve effective use in an adaptive system? In this context, another interesting challenge is to understand how users should react to adaptive IT systems that manipulate users in a targeted manner through their ability to adapt. In general, users must learn how to deal with the adaptation capabilities of systems.

Overall, the IS discipline can contribute valuable existing knowledge to better understand and subsequently design adaptive systems. Building on and extending this knowledge, further interdisciplinary research is required to create positive instrumental and social outcomes for individuals, organizations, and society.

6.2 Data Richness in Virtual Reality and When Using AI-based Chatbots: The Trade-Off between Opportunities and Ethical Aspects of Adaptive Systems

IS allow organizations (i.e., institutions or companies) to collect an increasing variety of data about their users’ behavior, states, and emotions. Data collection does not solely target customers but also includes the organization’s own employees, its suppliers, and its collaboration partners. Yet data-driven applications are nothing new; organizations have built adaptive systems for several decades. Just think of the prevalence of marketing efforts to personalize internal and external campaigns. However, it seems that with recent technical developments, such as VR (and the related idea of the Metaverse) or ChatGPT and other large language models, the possibilities for gathering data are taking an unprecedented leap, particularly regarding data variety. For example, communicating with the system through natural language provides the model provider with immensely rich information. This kind of rich exchange between user and system will develop even further when users soon start talking instead of only texting with ChatGPT, because voice transmits contextual information to the communication partner, such as gender and emotions (Gunes and Schuller 2013; Rouast et al. 2021; Stern et al. 2021).

A second example is virtual reality (VR), which is accessed through head-mounted displays (HMDs). State-of-the-art HMDs include a plethora of sensors. Many of these are needed from a technical perspective to create an immersive experience. For example, HMDs must track movement data and head position to adapt the virtual world to the users’ actions. Most headsets also integrate eye-trackers to capture users’ gaze information and to save computing power by facilitating foveated rendering (Godin et al. 2004). Recent HMD models also include biosensors such as electroencephalography (EEG) and electromyography (EMG). Even though these sensors are not essential for the VR experience, they can – if available – be used to gain crucial information about the user, such as current cognitive receptivity. With all this data at hand, users can be assisted by means of highly adaptive systems. For example, chatbots could adapt their human-like appearance to the users’ context (Heßler et al. 2022). Research has shown that users want more human-like decision support in pro-social contexts, where they regard empathy and autonomy as important decision criteria. Such user preferences could easily be learned by assistance systems that help users make pro-social decisions whenever they express their wishes and preferences in natural language. In other projects, research shows that, using only eye-tracking data collected from users shopping in a VR store, a system can employ machine learning to predict the users’ shopping motives and the point in time when help is needed (Pfeiffer et al. 2020; Weiß et al. 2023).
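As an illustration of this last idea (and not of the cited studies’ actual models), the sketch below trains a toy classifier on synthetic gaze features to flag moments when a shopper may need help. Features, labels, and data are hypothetical.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical gaze features per time window:
# [mean fixation duration (s), number of re-fixated products, scanpath length (m)]
X = [[0.2, 1, 4.0], [0.8, 6, 1.2], [0.3, 2, 3.5], [0.9, 7, 0.8]]
y = [0, 1, 0, 1]   # 1 = the user appeared to need help in that window

model = LogisticRegression().fit(X, y)

current_window = [[0.7, 5, 1.0]]
if model.predict(current_window)[0] == 1:
    print("Offer assistance: the gaze pattern suggests the shopper is stuck.")
```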

All in all, the possibilities for adaptation will be enormous with these technological developments. With the next generation of assistance systems, users will experience and appreciate an increasing degree of adaptation and personalization. As we know from research and our own lives, we have to expect that users will come to like these adaptations because of the convenience they offer. Even if the broad public is aware of data privacy, more adaptation and personalization will most likely entail accepting or ignoring privacy issues. However, this privacy paradox needs to be explored, as the use of new data sources might cross a threshold in terms of privacy threats. For example, users could be identified and de-anonymized (Miller et al. 2020), mental health conditions such as depression and schizophrenia might be detected (Alghowinem et al. 2013; Benson et al. 2012), and personality traits could be inferred (Evin et al. 2022). Users might perceive such risks as more threatening than what they are used to from current applications and, coupled with the possibilities of AI, these risks might even lead to discrimination (Pfeiffer et al. 2023). Therefore, solutions must be brought forward that show how we can use, for example, eye-tracking data in a privacy-preserving manner. For example, Nair et al. (2022) add noise to the data through simple configurations of the HMD and controllers, such that attackers can no longer use it to identify the user across multiple sessions. Other approaches include countermeasures for eye-trackers (David-John et al. 2021; Steil et al. 2019) and privacy-preserving full-body motion capture systems (Pfeuffer et al. 2019). From the IS point of view, an interesting stream of research is, therefore, the user acceptance of privacy-preserving technological approaches (Bu et al. 2021).
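The noise idea can be illustrated generically: an identifying attribute inferred from tracking data is perturbed before release, here with a Laplace mechanism as a stand-in. This is not Nair et al.’s (2022) exact method; the attribute, sensitivity, and epsilon are hypothetical.

```python
import numpy as np

def privatize(value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy version of an identifying attribute: Laplace noise
    with scale sensitivity/epsilon (smaller epsilon = stronger privacy)."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

user_height_m = 1.82   # attribute inferable from HMD pose, usable for re-identification
print(privatize(user_height_m, sensitivity=0.3, epsilon=1.0))
```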

Yet, while research can contribute to the above issues by analyzing users’ privacy behavior and making users aware of it, the question remains whether this is enough to protect users, and whether minimizing privacy threats is the obligation of the user or rather of the provider. What is more, companies’ motivation to protect user privacy might be rather low. Therefore, it could be of utmost importance that governments and institutions take action and consider and regulate the ethical consequences of highly adaptive systems. The EU has taken a good step in this direction by proposing the EU AI Act, published by the European Commission in 2021. It distinguishes between the risks of different AI systems, banning the most harmful ones, and establishes legal obligations for providers of ‘high risk’ AI systems (Pfeiffer et al. 2023). Thus, IS researchers should consider informing politics and legal entities about the possibilities, but also the ethical implications, of adaptive systems in order to make them sustainable.

7 Conclusion: Towards a Common Understanding of Hybrid Adaptive Systems


Ivo Benke, Alexander Maedche, Jella Pfeiffer

In this discussion paper, scholars from the disciplines of management, economics, psychology and neuroscience, computer science, and IS have laid out their disciplinary perspectives on adaptive systems and presented specific research challenges based on their disciplinary backgrounds. There is agreement that the interplay between social entities and IT is inherently complex. Additionally, when investigating adaptive systems, theory development as well as empirical measurement are challenging. To successfully design adaptive systems, it is important to understand their characteristics and select a suitable configuration (e.g., with regard to flexibility, integration, learning ability, robustness, and responsiveness). Furthermore, there is agreement that the concept of adaptive systems comes with multiple ethical concerns, especially if adaptation becomes a means to manipulate social entities. Overall, a key question is what the driving goal of an adaptive system should be. There might be multiple goals when social entities are involved, and balancing these goals is essential.

The discussion in the previous sections has shown that a truly interdisciplinary research perspective is required for an integrated view of adaptive systems. This should, in consequence, make it possible to design adaptive systems for the greater benefit of individuals, groups, organizations, and society. In order to successfully conduct interdisciplinary research, a common understanding of the phenomenon is necessary. Early definitions of adaptive systems come from control systems in the area of control and regulation technology (e.g., see Wiener et al. 2016). Control systems consist of input variables (parameters), output variables, and an adjustment mechanism that determines the adaptation from input to output (Unbehauen 2000). Simply put, the adjustment mechanism processes the input variables and adjusts the system behavior based on a dedicated transfer function. This transfer function can either be static or dynamically adapt its properties (e.g., structure and parameters) to the system behavior (Unbehauen 2000). In software engineering, multiple approaches have extended and adjusted this concept with the provision of an adaptation goal as well as observer and controller elements within the adjustment mechanism. Examples are the self-adaptive systems concept (Weyns 2019) and the observer-controller architecture (Tomforde et al. 2011). Taking the user into account, research in the early 2000s started to use these concepts to build adaptive interface models (Rothrock et al. 2002), in which the user and the adaptive IT system coexist and the adaptive interface model allocates who should conduct the task at hand, the user or the machine (Feigh et al. 2012), following the “perceive–select–act” approach (Wickens et al. 2015).

Building on these conceptual roots and the discussion above, we aim to establish a cross-disciplinary understanding of adaptive systems that is valid and accepted across these disciplines and provides a foundation that allows future researchers to further investigate and develop the concept. Rooted in the different perspectives’ understandings, we therefore elevate the concept of adaptation from a purely technical to a socio-technical and human-cognitive perspective, in which the adaptation may originate from an interplay of (1) individual physical and physiological input factors, (2) social (i.e., interpersonal and organizational) input factors, and (3) technical input factors. Since this is a hybrid perspective, both social and technical, we call these systems hybrid adaptive systems. Figure 1 presents the conceptualization of hybrid adaptive systems. Following the conceptual roots of socio-technical systems thinking, the adaptive IT system and the involved social entities (individuals, teams/groups, economic institutions) co-adapt in parallel (cf. Fig. 1). The adaptive IT system may actively adapt itself and automatically define the adaptation mechanism. For example, instead of a user switching a light on or off, the system can automatically control the brightness based on the environmental light conditions as well as learned user preferences. In parallel, users may themselves decide to adjust the brightness manually or adapt their own perception of the brightness.

Fig. 1: Hybrid adaptive system conceptualization

An adaptation mechanism for adaptive IT systems requires three elements: input variables, output variables, and an adjustment mechanism. The adjustment mechanism contains the adaptation goals and a transfer function as an instantiation of the detailed procedural transfer steps from input to output variables according to the adaptation goals. The input parameters are perceived through sensors or system input modalities (such as keyboard inputs) and processed by the adjustment mechanism. Furthermore, the input and output parameters can be obtained at the individual, team, group, or institutional level. The adaptive IT system’s output variables are changed based on the adjustment mechanism. The adaptive IT system continuously executes the adaptation mechanism (i.e., the process from input to output parameters via the transfer function). In parallel to the adaptation of the IT system, the social entities interacting with the adaptive IT system continuously adapt themselves. We call this the co-adaptation phenomenon.
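A compact sketch can mirror this conceptualization, following the brightness-control illustration above: input variables, an adjustment mechanism with an adaptation goal and a transfer function, and output variables, where the transfer function’s parameters may themselves be adapted in response to the user’s manual corrections (the social entity’s side of co-adaptation). All values are illustrative.

```python
class AdjustmentMechanism:
    def __init__(self, goal_lux: float):
        self.goal_lux = goal_lux   # adaptation goal: target illumination
        self.gain = 0.001          # parameter of the transfer function

    def transfer(self, ambient_lux: float) -> float:
        """Transfer function: map the sensed input variable (ambient light)
        to the output variable (lamp brightness in [0, 1])."""
        return min(1.0, self.gain * max(0.0, self.goal_lux - ambient_lux))

    def adapt_parameters(self, user_correction: float) -> None:
        """Dynamic adaptation: learn from the user's manual adjustments."""
        self.gain *= (1.0 + 0.5 * user_correction)

mechanism = AdjustmentMechanism(goal_lux=500)
print(mechanism.transfer(ambient_lux=120))         # system-driven adaptation: 0.38
mechanism.adapt_parameters(user_correction=+0.2)   # the user nudged it brighter
print(mechanism.transfer(ambient_lux=120))         # learned preference: ~0.42
```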

Let us outline two examples that explain our hybrid adaptive systems conceptualization in a simple way. First, VR technology can be applied to treat phobias, such as the fear of spiders (Garcia-Palacios et al. 2002). The VR devices scan the user’s behavior and the environment as input parameters and adjust the treatment (e.g., the appearance of virtual spiders). In parallel, the humans interacting with the adaptive VR technology learn to deal with spiders in the virtual environment, for example by ignoring them or becoming familiar with them. They potentially co-adapt their real-life behavior based on this interaction experience with the adaptive VR technology. Second, a social, economic, and technical example is the AR game Pokémon Go, in which users search for virtual items in their real-world environment with their mobile phones (Shiau and Huang 2023). In this game, the system uses input parameters such as the physical location, but also the social network status (e.g., number of friends) of the player, to feed the adjustment mechanism, which ultimately adapts both technical output features (e.g., the appearance of Pokémon) and socio-economic ones (e.g., the group status of a player). In consequence, using this system involves a technological adaptation of the algorithm underlying the game, but also comes with social (including economic) co-adaptation realized during group interactions and in the social group configuration.

In conclusion, we have outlined the concept of hybrid adaptive systems, consisting of adaptive IT systems and adaptive social entities. Continuous co-adaptation between adaptive IT systems and social entities makes hybrid adaptive systems highly dynamic. We argue that hybrid adaptive systems are by definition interdisciplinary and should therefore be investigated from an interdisciplinary perspective. With the perspectives collected in this discussion and our conceptualization of hybrid adaptive systems, we hope to stimulate the BISE community to further investigate this interesting phenomenon in the future.