1 Introduction and Related Work

The use of wearable devices and visual methods for monitoring relevant physiological parameters in health, sport, and fitness applications opens up new opportunities for recognizing the emotional state, mood, and stress level of users and for optimizing the user experience through individual software adaptation in various other fields, e.g. e-business, human-machine interaction, and gaming.

Architectures for real-time situation-adaptive HCI systems have to organize the observation and real-time analysis of data signals gathered from mobile and stationary devices, simultaneously evaluate the task-related and emotional user state, measure the cognitive load, and at the same time allow exploiting additional contextual and ambient data. The goal of such architectures is to adapt and tailor both the user interface and the content to individual users, interactive devices, and changing work contexts.

1.1 Emotion-Recognition in HCI

In contrast to the experimental and clinical acquisition and analysis of bio-signals for patient monitoring, mobile interactive application environments require unobtrusive measurement and tracking methods that interfere as little as possible with the user's freedom of action. However, initial field studies conducted in the run-up to major development projects in the domain of media-rich interactive systems may permit the laboratory-scale acquisition of signal types that until now were reserved for medical purposes.

The automated evaluation and subsequent use of emotions by intelligent systems is the focus of the field of Affective Computing, founded by Rosalind Picard. In [15] several research projects for recognizing stress, emotional engagement, and positive emotions are presented. Many of these approaches are based on visual recognition methods.

In [18] methods for measuring non-visual bio-signals are discussed. With the advent of wearable devices, such signals can also be used for emotion analysis. These data sources include psycho-physiological data on cardiac (ECG), muscular (EMG), ocular (EOG), and dermal (EDG) activity [16].

In the field of usability engineering, visual methods such as eye- and gaze-tracking have been applied successfully for many years [20]. In addition, mature software systems for facial analysis now reliably provide the intensities and temporal changes of the basic emotions of one or several users in real time. Research in this area was initiated by Ekman [4] and has since been refined in commercial systems, e.g. [14]. Such methods recognize the 44 typical facial muscle movements, the so-called action units (AUs), which occur in the eyebrow, eye, and mouth areas. Combinations of AUs map to the six basic human emotions: joy, fear, disgust, surprise, sadness, and anger. New recognition methods based on AU analysis continue to emerge.
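
To illustrate how AU combinations can be mapped to basic emotions, the following sketch encodes a few prototype combinations commonly cited in the FACS literature; the exact AU sets, intensities, and thresholds used by commercial recognizers such as [14] are proprietary learned models and may differ considerably from this simplification.

```typescript
// Hypothetical sketch: mapping detected action units (AUs) to basic emotions.
// The AU combinations below are commonly cited FACS prototypes and are
// illustrative only; production classifiers use richer, learned models.
type ActionUnit = number;

const EMOTION_PROTOTYPES: Record<string, ActionUnit[]> = {
  joy: [6, 12],                 // cheek raiser + lip corner puller
  sadness: [1, 4, 15],          // inner brow raiser + brow lowerer + lip corner depressor
  surprise: [1, 2, 5, 26],      // brow raisers + upper lid raiser + jaw drop
  fear: [1, 2, 4, 5, 20, 26],
  anger: [4, 5, 7, 23],
  disgust: [9, 15],             // nose wrinkler + lip corner depressor
};

// Returns the emotion whose prototype AUs are best covered by the detected AUs.
function classifyEmotion(detectedAUs: Set<ActionUnit>): string {
  let best = "neutral";
  let bestScore = 0;
  for (const [emotion, prototype] of Object.entries(EMOTION_PROTOTYPES)) {
    const hits = prototype.filter((au) => detectedAUs.has(au)).length;
    const score = hits / prototype.length;
    if (score > bestScore) {
      best = emotion;
      bestScore = score;
    }
  }
  return bestScore >= 0.5 ? best : "neutral"; // arbitrary illustrative threshold
}
```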

In [17, 19] new and even more precise emotion classifiers based on the analysis and combination of facial micro-expressions (< ½ s) and macro-expressions (½ s to 4 s) are presented and discussed. These classifiers were derived by letting intelligent recognition systems learn from human assessments of the emotional states of persons observed in video clips.

Visual methods can also be useful for measuring the user's stress level. In [12] a contact-free, camera-based method is discussed that compares facial expressions to images acquired during vein-based measurements of the cognitive stress level of test persons.

An example of a non-visual method for the reliable recognition of the major basic emotions is the integration of embedded wearable devices into clothing to detect short-term variations of the pulse rate [8]. Commercially available devices such as the Empatica E4 wristband [6] can be used for similar purposes.

An advanced non-invasive method for emotion recognition based on the visualization of human brain activity is presented in [16]. The system, which is tested in real human-machine interaction environments, uses a head-mounted hood and applies functional near-infrared spectroscopy (fNIRS). The achievable results are comparable in quality to magnetic resonance measurements in clinical environments.

The effects of combining several measuring modalities for recognizing emotion levels are discussed in [9].

1.2 Situation Analytics as a Software-Engineering Approach for Adaptive HCI-Systems

Designing practical approaches that use dynamic, situation-specific data acquired at runtime to build software adaptation mechanisms reacting to changing user behavior is a challenge for software engineering.

In [3] the foundations for a strongly user- and situation-aware software engineering are formulated. Chang calls this new software engineering discipline situation analytics. Situation-aware systems observe the dynamic changes of the emotional and cognitive user behavior during the use of interactive software systems.

It is the goal of this innovative software engineering discipline to devise system architectures and tools for analyzing the cognitive and emotional behavior of users within their individual context and to apply intelligent methods and technologies for a continuously running situation analysis that accompanies the interaction process. Such practical systems must be able to synchronize, evaluate, and exploit the various acquired data at runtime and to select and perform the appropriate dynamic adaptations. Thus, varying user requirements and preferences can be recognized and reacted to while users cope with their interactive tasks. Chang introduces the Situ framework, which offers comprehensive modeling and runtime support for situation-adaptive software systems. In [13] the functional language Situ f is used to demonstrate how typical situations can be captured in software development environments and how situation-specific software structures and behavior can be reused.

In our previous work [10, 11] a prototypical system was introduced that observes users through visual and biophysical channels in order to recognize their detailed emotional reactions while they interact with a prototypical dynamic web application and to exploit the changing user mood for runtime adaptation of the application.

Recently, we have expanded the original system into a development environment for situation-aware adaptive systems. The SitAdapt system and its architecture will be presented in Sect. 2. The architecture enables the system to cooperate with the model- and pattern-based user interface development framework PaMGIS [7]. In addition, SitAdapt uses a runtime component for the intelligent evaluation of the emotion-tracking data and for pattern-triggering of dynamic user interface adaptations.

1.3 Adaptive Systems in HCI

Adaptivity and adaptability in HCI systems can be viewed from different angles. With the introduction of systems that exploit the individual emotions, cognitive load, and experiential preferences of the user within his or her dynamically changing ambient environment for software adaptations, the psychological and cognitive aspects can be addressed.

However, the more technical aspects also have to be considered when system architectures and sustainable development approaches for contemporary interactive system environments are designed and implemented. As the use of interactive software in the everyday life of users increases steadily, the number of devices and their different user interfaces is also growing. The programming and design of user interfaces for interactive applications that are able to migrate between different devices and device types is a challenging problem, because many different contexts of use (user, platform, and environment) have to be supported [2]. The development of these multiple-adaptive migratory user interfaces (MAMUIs) confronts developers with complex requirements such as task continuity and adaptability to context changes [21].

The PaMGIS framework that is used by the SitAdapt system offers development and runtime support for such multiple-adaptive migratory user interfaces and can react to context changes in the user's environment in real time.

In general, three different types of adaptation can be distinguished in the field of user interfaces [1, 21]:

  • Adaptable user interfaces. The user customizes the user interface to his or her personal preferences.

  • Semi-automated adaptive user interfaces. The user interface provides recommendations for adaptations. The user has to decide whether or not to accept the recommendation.

  • Automated adaptive user interfaces. The user interface automatically reacts to changes in the context-of-use.

SitAdapt, in cooperation with the PaMGIS framework, covers semi-automated as well as automated user interface adaptation. To arrive at truly situation-aware systems, however, SitAdapt must not only interact with the user interface modeling aspects but also take into account the more content-related static and dynamic system aspects that are typically covered by the domain or business models of interactive applications.

2 SitAdapt: A Model-Based Adaptive Architecture with Pattern Triggering

In this section, we discuss the SitAdapt system, which implements an architecture for the runtime support of model-based interactive applications with dynamic adaptation in real time.

For this purpose, the PaMGIS (Pattern-Based Modeling and Generation of Interactive Systems) development framework [7] was extended with runtime access functions. SitAdapt accesses the PaMGIS framework, which is based on the CAMELEON Reference Framework (CRF) [2]. The CRF serves as the de-facto reference architecture for the model-based and model-driven development of user interfaces. It proposes a workflow for transforming an abstract user interface via intermediate model artifacts into a final user interface [2]. The PaMGIS structure with the incorporated CRF models is shown in Fig. 1.

Fig. 1. Overview of the PaMGIS models and their interrelations

The abstract user interface model (AUI) is generated from the information contained in the domain model of the application, which includes both a task model and a concept model. The AUI mainly comprises the specifications of the abstract user interface objects. In the domain model and the initially rendered AUI, the user interface is still independent of the usage context. After the completion of AUI modeling, the AUI model can be transformed into a concrete user interface model (CUI). This process exploits the information of the context model and the structure of the dialog model. For defining the dynamic aspects of the user interface, PaMGIS uses a dialog model based on dialog graphs that were originally introduced by the TADEUS system [5].

In the next step, the final user interface model (FUI) is generated automatically from the CUI model. Depending on the target implementation language, the FUI must either be compiled or can be executed directly by an interpreter (Execute UI). The models are specified in conformity with the Extensible Markup Language (XML) [2].
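
As a rough illustration of this model chain (AUI → CUI → FUI), the following sketch expresses the transformations as simple typed functions; all type and function names are hypothetical simplifications and do not correspond to the actual XML-based PaMGIS model specifications.

```typescript
// Hypothetical, heavily simplified view of the PaMGIS/CRF model chain.
// Real PaMGIS models are specified in XML and are far richer than these types.
interface DomainModel { taskModel: string[]; conceptModel: string[] }
interface ContextModel { device: "desktop" | "tablet" | "phone" }
interface AUIModel { abstractObjects: string[] }   // still context-independent
interface CUIModel { concreteWidgets: string[] }   // tailored to the context of use
interface FUIModel { markup: string }              // compiled or interpreted ("Execute UI")

// AUI is derived from the domain model (task model + concept model).
function deriveAUI(domain: DomainModel): AUIModel {
  return { abstractObjects: [...domain.taskModel, ...domain.conceptModel] };
}

// AUI -> CUI exploits the context model (and, in PaMGIS, the dialog model).
function deriveCUI(aui: AUIModel, context: ContextModel): CUIModel {
  const prefix = context.device === "phone" ? "compact-" : "full-";
  return { concreteWidgets: aui.abstractObjects.map((o) => prefix + o) };
}

// CUI -> FUI: generate final markup for the target platform.
function generateFUI(cui: CUIModel): FUIModel {
  return { markup: cui.concreteWidgets.map((w) => `<widget name="${w}"/>`).join("\n") };
}
```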

PaMGIS serves as the development and implementation environment for SitAdapt applications and as their initial launch platform. It is also used for context and device-specific adaptations whenever the application is started from or migrated to a new target platform.

2.1 SitAdapt Runtime Environment

SitAdapt uses the models and HCI patterns that were accumulated during the development of the interactive system in use and/or that reside in the resource repositories as artifacts of earlier interactive system developments. The SitAdapt process allows for the contextual adaptation of applications at runtime. Thus, user experience, work quality, and efficiency can be optimized.

Our earlier work on situation analytics has led to the construction of a prototypical interactive application development environment including situation-aware software adaptation [10, 11]. The SitAdapt runtime architecture discussed in this paper (see Fig. 2) additionally provides a sound architectural basis for structuring the situation-aware adaptation process, accessing the model and pattern resources, and selecting the chain of necessary parameters that change dynamically over the runtime of the application.

Fig. 2. SitAdapt architecture

The recording component contains the following technologies for data recording (a sketch of the resulting data records is given after the list):

  • Eye- and gaze-tracking of the users with the Tobii eye-tracking system [20], with gaze points assigned to the interaction objects and design elements of the user interface.

  • Video-based facial expression and emotion analysis with the FaceReader software from Noldus [14], which supports six basic facial expressions (happy, sad, scared, disgusted, surprised, and angry). The system also recognizes the gender and an age interval of the user.

  • Empatica E4 wristband for heart-beat and stress-level analysis [6].
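
Conceptually, the observations delivered by these three channels can be represented as timestamped records, roughly along the lines of the following sketch; the field names are illustrative assumptions and do not correspond to the actual Tobii, Noldus, or Empatica APIs.

```typescript
// Illustrative record types for the three recording channels.
// Field names are assumptions for this sketch, not the vendor APIs.
interface GazeSample {
  timestamp: number;               // ms since session start
  x: number; y: number;            // gaze point in screen coordinates
  uiElementId?: string;            // interaction object the gaze is assigned to
}

interface FacialExpressionSample {
  timestamp: number;
  happy: number; sad: number; scared: number;
  disgusted: number; surprised: number; angry: number;  // intensities in [0, 1]
  gender?: "male" | "female";
  ageInterval?: [number, number];
}

interface WristbandSample {
  timestamp: number;
  heartRate: number;               // from the photo-plethysmography (PPG) sensor
  electrodermalActivity: number;   // from the electrodermal activity (EDA) sensor
}
```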

SitAdapt currently implements an architecture for the runtime support of model-based interactive applications with dynamic adaptation in real time (Fig. 2), with the adaptation applied at the final user interface (FUI) modeling level.

All available data from the eye-tracking system, FaceReader, and the wristband are communicated to the SitAdapt framework over an application programming interface (API). The signal synchronization component synchronizes the eye- and gaze-tracking data in real time with the signals for the six basic emotions, the gender and age range of the user, and the wristband data (electrodermal activity sensor, photo-plethysmography sensor). The situation analytics component recognizes the current situation. It evaluates the data provided by the various sensors as well as the metadata and state information provided by the interactive application in order to analyze the situation at a given moment.
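
A minimal sketch of how timestamp-based alignment of these streams could be realized is given below; the nearest-sample strategy, the tolerance window, and all identifiers are assumptions made for illustration and not the actual SitAdapt synchronization logic.

```typescript
// Minimal sketch: aligning sensor streams by timestamp (nearest sample within a window).
// This is an illustrative assumption, not the actual SitAdapt synchronization component.
interface TimestampedSample { timestamp: number; [key: string]: unknown }

function nearestSample<T extends TimestampedSample>(
  samples: T[],            // assumed sorted by timestamp
  t: number,
  maxSkewMs = 100
): T | undefined {
  let best: T | undefined;
  let bestDelta = Infinity;
  for (const s of samples) {
    const delta = Math.abs(s.timestamp - t);
    if (delta < bestDelta) { best = s; bestDelta = delta; }
  }
  return bestDelta <= maxSkewMs ? best : undefined;
}

// Build one fused observation per gaze sample by attaching the closest
// facial-expression and wristband samples.
function synchronize(
  gaze: TimestampedSample[],
  face: TimestampedSample[],
  wrist: TimestampedSample[]
) {
  return gaze.map((g) => ({
    timestamp: g.timestamp,
    gaze: g,
    face: nearestSample(face, g.timestamp),
    wristband: nearestSample(wrist, g.timestamp),
  }));
}
```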

Depending on the respective situation and the emotional state of the user, the evaluation and adaptation component makes a rule-based decision, with the help of the data from the situation analytics component, about whether and in which way the user interface of the interactive application is dynamically adapted. To configure the adaptation, the component also has access to the domain model, i.e. the task and concept model, of the target application.
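
For illustration, such a rule-based decision could be structured along the following lines; the rule contents, thresholds, and adaptation identifiers are hypothetical and merely indicate the kind of mapping from recognized situations to adaptations.

```typescript
// Illustrative rule-based adaptation decision. Rule contents and thresholds are
// hypothetical; the actual SitAdapt rules are configured per target application.
interface Situation {
  applicationState: string;          // e.g. "checkout", as reported by the application
  emotions: { happy: number; sad: number; angry: number };
  pulseDeltaBpm: number;             // change in pulse rate vs. a short time ago
  hesitating: boolean;               // e.g. cursor near a button without clicking
}

interface AdaptationRule {
  name: string;
  condition: (s: Situation) => boolean;
  adaptation: string;                // identifier of a situation pattern / template
}

const rules: AdaptationRule[] = [
  {
    name: "frustration-detected",
    condition: (s) => s.emotions.angry > 0.6 && s.pulseDeltaBpm > 10,
    adaptation: "simplify-form",
  },
  {
    name: "confusion-detected",
    condition: (s) => s.hesitating && s.emotions.happy < 0.2,
    adaptation: "show-help-hint",
  },
];

// Return the adaptations whose conditions hold for the current situation.
function decideAdaptations(s: Situation): string[] {
  return rules.filter((r) => r.condition(s)).map((r) => r.adaptation);
}
```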

For configuring the adaptation, the evaluation component supplies a set of situation patterns, pattern subcomponents, and/or templates from the PaMGIS pattern repository, which are used, depending on the adaptation requirement, for modifying both the client and the server part of the target application.

2.2 Example for SitAdapt Operation

SitAdapt is currently used mainly for testing prototypes from the e-business domain. The situation analytics component continuously evaluates the current situation of the user and the target application. It can therefore quickly react with individual adaptations to problems encountered by the user during interaction with the application or to new user requirements triggered by a previous adaptation of the user interface.

A typical scenario that can be handled by SitAdapt is the following:

  1. The situation analytics component recognizes a change in the application context: the user has finished his or her shopping tour, has put several items into the shopping cart, and has entered the checkout area of the e-shop.

  2. The emotion level for happy is high; however, a significant level of sadness is also present.

  3. The pulse rate is higher than 15 s earlier, and the gaze of the user moves steadily between the shopping cart with the selected items and the total price of the purchase displayed at the checkout. The mouse pointer is near the buy button, but the user still hesitates to press it.

  4. A situation pattern is activated that recognizes that the total price is above a certain level. The pattern triggers the display of a beautifully styled voucher with a text indicating that the voucher is valid only if the user finalizes the purchase within the next ten minutes (a sketch of such a situation pattern is given below).
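
Purely for illustration, the situation pattern from step 4 could be expressed along the following lines; the structure, field names, and threshold values are hypothetical and do not reflect the PPSL notation actually used in PaMGIS.

```typescript
// Hypothetical sketch of the voucher-triggering situation pattern from the scenario.
// Thresholds and names are illustrative; the real pattern is specified in PPSL.
interface CheckoutSituation {
  applicationState: string;
  cartTotal: number;            // total price shown at the checkout
  happy: number;
  sad: number;
  pulseDeltaBpm: number;        // pulse rate increase vs. 15 s earlier
  hoveringOverBuyButton: boolean;
}

const voucherPattern = {
  name: "checkout-hesitation-voucher",
  // Condition corresponding to steps 1-4 of the scenario above.
  applies: (s: CheckoutSituation) =>
    s.applicationState === "checkout" &&
    s.cartTotal > 100 &&                 // "total price above a certain level" (assumed value)
    s.happy > 0.5 && s.sad > 0.2 &&
    s.pulseDeltaBpm > 5 &&
    s.hoveringOverBuyButton,
  // Adaptation: display a time-limited voucher at the final user interface (FUI) level.
  adaptation: {
    template: "styled-voucher",
    validForMinutes: 10,
  },
};
```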

2.3 SitAdapt Implementation

The current SitAdapt test system and evaluation environment was created with the ASP.NET MVC web application framework developed by Microsoft. The example applications implemented for this paper use the open-source JavaScript framework AngularJS and can access the tools and repositories of the pattern- and model-based development framework [11].

All the necessary modifications for supporting the runtime adaptation had to be integrated into the PaMGIS framework components. For building the task models and designing the situation patterns needed for the current e-business example applications, the modeling tools and the PPSL (PaMGIS pattern specification language) could be used.

3 Conclusion and Future Work

New technologies for capturing and observing bio-physical and visual user data make it easier to determine the current and changing emotional and cognitive state of the user in human-technology interaction. Examples of this are visual emotion detection in the face [17, 19] as well as the evaluation of cardiac, brain, muscle, and eye signals [16, 18]. Frequently, features or visual micro- and macro-expressions composed of consecutive single signals are formed, which facilitate the reliable assignment of emotional states.

These newly acquired measurement data can be used to improve information and suggestions for the dynamic adaptation of the interactive software. Currently we are integrating these new measurement data into the SitAdapt system.

In this paper we have discussed the present implementation of the SitAdapt system that uses a well-structured architecture for communicating with the PaMGIS MB-UIDE. In order to allow for real-time adaptation functionality, we had to modify several components of the framework.

The quality of the adaptation process and the rules will be enhanced over time by a deep-learning-based optimization component that evaluates the selected adaptation choices for different applications and users. This component is currently under development and will be a part of future releases of the SitAdapt architecture.