
1 Introduction

With the proliferation of digital technology embedded in everyday objects, Intelligent Environments such as Smart Homes include a growing number of intelligent, interconnected devices. However, the provided interaction possibilities are becoming increasingly invisible (known as the Invisibility Dilemma [1]), especially with regard to the expanding use of natural interaction such as motion gestures [2]. In most cases, it is not obvious which gestures can be performed and what corresponding functionality is available among these devices. Smart Devices are produced by different manufacturers and are therefore highly heterogeneous in nature, protocols, and functionality. To reduce this complexity, devices could be enabled to reflect their interaction possibilities by providing ambient manuals on available output devices [3]. We addressed this problem in the course of an ambient computing class at the University of Lübeck. In the following, we present the development process, implementation, and provision of ambient manuals on different devices, based on an intelligent light-control operated via Natural User Interfaces (NUIs).

2 Light-Control

In order to build a representative scenario of an Intelligent Environment with low functional complexity but heterogeneous Smart Devices, three Smart Lights were interconnected into an ambient light-control system, which can be operated by two different NUIs.

2.1 Used Devices

As input devices, one stationary (Leap Motion) and one wearable (Myo) device were used. The Leap Motion Controller uses two infrared (IR) cameras capturing the reflection of three IR LEDs to recognize finger and hand motion gestures performed within an area up to 60 cm above the device. The Myo bracelet is worn on the user's forearm at the height of the anconeus. Using electromyography (EMG) sensors and an inertial measurement unit (IMU), performed hand and finger gestures are recognized by analyzing muscle contractions, velocity, and acceleration. Both devices are able to recognize a given set of gestures out of the box. Three light sources with slightly different functionality and heterogeneous protocols were used: two LED strips, namely the Adafruit NeoPixel and the ArtNet LED Dimmer 4, and one light bulb, the Philips Hue. Whereas the Adafruit NeoPixel allows controlling each individual LED, the ArtNet LED Dimmer and the Philips Hue do not. All light sources can be turned on and off, and their color and brightness can be changed.
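
To illustrate the degree of protocol heterogeneity the system has to bridge, the following minimal sketch shows how the same "set color" operation might look for two of the light sources. The Python libraries (phue, neopixel), addresses, and pin assignments are assumptions for illustration, not the implementation described here.

# Illustrative sketch of the protocol heterogeneity (assumed libraries and addresses).
from phue import Bridge            # Philips Hue: REST calls via the Hue bridge
import board
import neopixel                    # Adafruit NeoPixel: direct per-LED control

# Philips Hue: the bulb is addressed as a whole through the bridge.
bridge = Bridge("192.168.0.10")                      # assumed bridge IP
bridge.set_light(1, {"on": True, "hue": 0, "bri": 128})

# Adafruit NeoPixel: every LED of the strip can be set individually,
# which is what later enables the Light follows Hand command (Sect. 2.3).
pixels = neopixel.NeoPixel(board.D18, 30)            # assumed pin and strip length
pixels[0] = (255, 0, 0)                              # only the first LED turns red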

2.2 System Architecture

In order to interconnect all devices mentioned in Sect. 2.1, an extensible system architecture was developed (see Fig. 1). The central input component is the Gesture Provider, which is responsible for setting up the connection to the NUI devices and recognizing performed gestures within per-device Gesture Detectors. All gestures are transformed into system-coherent commands and are provided to other system components via a publish-subscribe pattern using device-specific Gesture Listeners, provided the corresponding device is available and ready for communication. Furthermore, the communication with all light sources is handled by the Light Manager, which subscribes to the available Gesture Listeners and coordinates the delivery of received commands to the currently selected light source by translating them into the required protocol. In addition, the current state of the light-control and detailed information about the NUI devices can be accessed via the Feedback Server, which subscribes to the Gesture Listeners and observes the Light Manager's current state. Information such as the selected light source, the last recognized gesture, or the current color values can be retrieved separately by external instances, such as ambient manuals, via HTTP in JSON format. Additional light sources and NUI devices can easily be integrated into the existing system by developing a corresponding Gesture Detector or Light Connector, respectively, which sets up the connection to the new device.
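
The following minimal sketch illustrates the publish-subscribe flow between a Gesture Listener and the Light Manager described above; class names, command strings, and the stub Light Connector are simplified assumptions rather than the actual implementation.

# Minimal sketch of the Gesture Listener / Light Manager interplay (assumed names).

class GestureListener:
    """Publishes system-coherent gesture commands to subscribed components."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, command):
        for callback in self._subscribers:
            callback(command)


class PrintConnector:
    """Stand-in for a protocol-specific Light Connector."""
    def __init__(self, name):
        self.name = name

    def apply(self, command):
        print(f"{self.name} <- {command}")    # real connectors translate to NeoPixel/ArtNet/Hue

class LightManager:
    """Delivers commands to the currently selected light source."""
    def __init__(self, connectors):
        self.connectors = connectors          # e.g. {"hue": ..., "neopixel": ...}
        self.selected = next(iter(connectors))

    def on_command(self, command):
        if command == "NEXT_LIGHT":           # switching between light sources (Sect. 2.3)
            keys = list(self.connectors)
            self.selected = keys[(keys.index(self.selected) + 1) % len(keys)]
        else:
            self.connectors[self.selected].apply(command)


# Wiring: the Light Manager (and likewise the Feedback Server) subscribes to a listener.
myo_listener = GestureListener()
manager = LightManager({"hue": PrintConnector("hue"), "neopixel": PrintConnector("neopixel")})
myo_listener.subscribe(manager.on_command)
myo_listener.publish("TOGGLE_ON_OFF")         # prints: hue <- TOGGLE_ON_OFF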

Fig. 1. System architecture

2.3 Control Capabilities

In addition to the functionality of commercially available Smart Lights, such as switching the light on and off and changing color and brightness, the light-control implementation includes a mechanism to switch between light sources. All functionality was mapped to motion gestures on both NUIs. For this purpose, the gestures provided by the SDKs were used first (such as the Myo's Wave-In/Out). In order to map additional functionality, further gestures were defined (see Table 1). The Light follows Hand command is only available on the Adafruit NeoPixel, because it is the only lamp whose LEDs can be controlled individually.

Table 1. Gesture overview
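
As a hedged illustration of such a mapping, the snippet below pairs gesture names with system commands; apart from the Myo's Wave-In/Out and the Light follows Hand restriction mentioned above, all gesture names and assignments are placeholders, not the actual content of Table 1.

# Placeholder gesture-to-command mapping; the real assignments are given in Table 1.
MYO_GESTURE_COMMANDS = {
    "wave_in":  "PREVIOUS_LIGHT",   # assumed assignment
    "wave_out": "NEXT_LIGHT",       # assumed assignment
    "fist":     "TOGGLE_ON_OFF",    # placeholder gesture/command pair
    # ... further gestures for color, brightness, and Light follows Hand
}

def is_available(command, light_source):
    """Light follows Hand is only offered for the individually addressable NeoPixel."""
    if command == "LIGHT_FOLLOWS_HAND":
        return light_source == "neopixel"
    return True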

3 Ambient Manuals

As part of building a realistic Intelligent Environment, the developed light-control represents a typical Smart Home scenario in which interconnected devices are controlled by NUIs. None of the devices used is able to provide information about available interaction possibilities or functionality to the user. In addition, functionality, and therefore interaction possibilities, varies between the devices. In order to facilitate the user's access to the light-control, it would be useful to provide an explanation of the available interaction, based on the current interconnection, in the form of manuals. Devices thus obtain a reflective character by explaining their interaction capabilities themselves on available output devices. For this purpose, typical output devices of Intelligent Environments were chosen, namely a projector (Philips PicoPix), a tablet (Samsung Galaxy Tab Pro 12.1), a Smart Watch (Samsung Galaxy Gear), and a Smart Glass (Epson Moverio BT-200). Two output devices were assigned to each NUI (Myo: projector and tablet; Leap Motion: Smart Watch and Smart Glass). On each output device, an ambient manual reflectively explaining the light-control's current interaction capabilities was developed in an independent but structured process.

3.1 Participatory Design

In order to cover potential users' needs, all manuals were developed in a participatory design process [4]. For this purpose, five subjects were recruited per device. To give all test persons an idea of the complexity of the Invisibility Dilemma, the output devices, the NUIs, and the developed Smart Home scenario were explained to the participants. After that, all participants got the opportunity to experience the NUIs themselves by controlling demo applications provided by the manufacturers. Paper prototypes of potential manuals were developed jointly with the subjects and used for further conception. As one of the most important requirements, all potential users mentioned a graphical representation of the available gestures. These representations should show a gesture's motion sequence and be available as an animation or a video. These findings are consistent with [5]. In addition to the system's interaction capabilities, it should be explained how to put a NUI into operation, and important information about the device's use should appear during interaction.

3.2 Visualization of Manuals

All paper prototypes resulting from the participatory design were developed further, incorporating the participants' requirements and wishes. Using the interfaces provided by the light-control (see Feedback Server in Fig. 1), ambient manuals were realized on all output devices. Figure 2 illustrates all manuals. Figure 2a depicts a mobile application running on a tablet. The positions of all light sources are visualized within a realistic spatial model, in which the currently selected light is colored depending on the selected color and brightness (here red with medium brightness). Pictograms located around the currently selected light source display the available functionality as well as animations of the corresponding gestures. As seen in Fig. 2b, gestures are visualized by means of realistic animations of hand and finger movements. In addition, information about the functionality provided by each light source is presented using pictograms, and instructions for commissioning the NUI can be accessed. Figure 2c displays the ambient manual projected onto a wall. Beside a video presenting the performance of a gesture, a pictorial as well as a textual representation is arranged next to it. In contrast to the previously presented manuals, users are able to train how to use the light-control by completing an interactive training mode after setting up the device. Checkmarks below the pictorial representations indicate the number of correctly performed gestures. Figure 2d visualizes the graphical interface projected into the user's field of view while wearing a Smart Glass. The currently selected light source is depicted in the center of a radial menu, surrounded by information about the available functionality. Next to the device and functionality information, a looped 3D-rendered animation shows the gesture to be performed within a realistic room model as soon as the user selects it.
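
As a sketch of how a manual could stay in sync with the light-control, the snippet below polls the Feedback Server via HTTP and reads the returned JSON; the endpoint URL, path, and field names are assumptions, not the documented interface.

# Sketch of an ambient manual querying the Feedback Server (assumed URL and fields).
import requests

def fetch_state(base_url="http://lightcontrol.local:8080"):
    """Retrieve the current light-control state as JSON via HTTP."""
    response = requests.get(f"{base_url}/state", timeout=2)
    response.raise_for_status()
    return response.json()

state = fetch_state()
# e.g. {"selectedLight": "hue", "lastGesture": "wave_out",
#       "color": [255, 0, 0], "brightness": 128}
print(state.get("selectedLight"), state.get("lastGesture"))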

Fig. 2. Ambient Manuals on different output devices (Color figure online)

3.3 Evaluation

Each manual was evaluated with respect to the workload experienced while using the light-control. In total, 60 subjects (15 per manual) aged from 13 to 61 were asked to perform the following three tasks: (1) select a light source and switch it on, (2) change the light's color and brightness, and (3) let the light follow your hand. After each task, participants filled out a NASA-TLX questionnaire [6] with six sub-scales whose values from 0 to 100 indicate the perceived workload from low to high. In addition, participants were asked to think aloud while performing the tasks. All participants recognized the necessity of reflective manuals within the given scenario. Using the provided manuals, all participants were able to fulfill the given tasks, whereas none of them was able to do so without. A preliminary analysis of the results reflects this in the lowest overall average value of 28 on the performance sub-scale. Overall, an average value of 30 was measured, indicating a low perceived workload when using the light-control guided by reflective interaction explanations. Furthermore, both manuals using a realistic representation of gesture performance (projector and Smart Glass) achieved the lowest values for mental demand (avg. 29 and 30), whereas the tablet manual achieved the lowest performance (avg. 21) and frustration (avg. 27) values. In conjunction with the participants' statements, this relates to the fact that this manual uses a spatial representation of the light-control illustrating every system reaction.
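
For clarity, the sketch below shows how such averages can be derived as unweighted (raw) NASA-TLX scores from the six sub-scale ratings; the sample ratings are purely illustrative and not the study data.

# Unweighted (raw) NASA-TLX: mean of the six sub-scale ratings (0-100).
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Illustrative ratings for one participant (not the study data):
example = {"mental": 30, "physical": 25, "temporal": 28,
           "performance": 21, "effort": 35, "frustration": 27}
print(round(raw_tlx(example), 1))   # 27.7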

4 Conclusion

This contribution presents a system architecture for an ambient light-control representing a realistic Smart Home scenario of three interconnected lights controlled by two NUIs. In order to explain the available functionality and interaction possibilities, four ambient manuals were developed and evaluated. Currently, we are further analyzing the evaluation results and working on the automatic generation and delivery of ambient manuals based on the current interconnection state of Smart Devices, using a structured self-description containing device information and available interaction capabilities.