1 Introduction

In mobile computing environments, the number of applications that provide appropriate services to users by utilizing sensing data (physiological signals, user profiles, and physical phenomena) is growing. Beyond conventional manual service delivery, recent mobile applications provide personalized services and selectively filtered information to users. In particular, a smartphone embeds various sensors, such as an accelerometer, a gyroscope, GPS, a microphone, and proximity, illuminance, and NFC sensors. Since users carry their smartphones at all times, the smartphone is a useful device for recognizing a user's location as well as moving speed, direction, and basic behavior.

In order to provide personalized services to mobile users, it is necessary to interpret the users' activities. The various sensors embedded in smartphones are an important source of input information for identifying these activities. Activity recognition enables personalized services by fusing and reasoning about the information gathered from the sensors. A context-aware technique, which makes decisions about the current situation based on a user's activity, can provide appropriate services to the user. The technique describes meaningful contexts for a given situation and then integrates those contexts. In particular, an embedded middleware that recognizes a user's activity in mobile computing environments has high industrial impact in areas such as mobile apps, connected cars, smart homes, simulation, education, and games.

As the concept of ubiquitous computing spreads, activity recognition is being actively investigated in many interdisciplinary fields [1]. In the human-computer interaction field, technologies for intelligent systems have been studied; these technologies exploit indirect information (for example, the user's gender, age, and location) as well as a user's direct input [2]. Applications using context information in mobile computing environments have received particular attention. Such context-aware services exploit the information obtained from the various sensors in mobile devices. Examples include mobile tour guide services that recommend appropriate paths according to a user's location and orientation, and mobile augmented reality services that augment objects of interest with appropriate content by using acceleration, orientation, and other sensor information.

It is difficult to clearly identify a user's activity or intentions, because existing activity recognition research for mobile application systems integrates a wide variety of sensing information in a single, uniform way. Moreover, existing decision-making technologies have been developed for special purposes in given domains such as smart homes, smart offices, and smart buildings. There has also been little consideration of a generalized algorithm that infers high-level decisions by collecting large amounts of data (context) obtained from various sensors [3–14]. In short, existing decision-making approaches cannot adequately fuse the different types of contexts obtained from different sensors to extract exact semantics. Thus, we need a middleware that adaptively integrates various contexts and infers users' activities according to each user's environment.

In order to deduce high-level semantic information from the internal and external sensors of a smartphone, an embedded middleware should classify the input data according to its characteristics and then integrate it. To support this, a generalized architecture for a context-aware embedded middleware in mobile computing environments is required. We therefore propose a context-aware embedded middleware that infers a user's activity from a given context by adopting a user-centric context integration technique. The proposed middleware recognizes the user's activity by integrating, in real time, the sensing information from the smartphone's internal and external sensors.

2 Activity Recognition and Context Awareness

In this paper, we define a systematic approach that identifies the key technologies for context fusion, activity detection, and reasoning and links them effectively. Using recently released smartphones, we design and implement an algorithm that recognizes, fuses, and reasons about activity contexts obtained from mobile and environmental sensors in a mobile computing environment. In addition, we provide a lightweight version of the context fusion, activity detection, and reasoning modules to make effective use of the limited resources of such an environment.

Context fusion, activity detection, and reasoning are the key properties of the context integration process. Context fusion is carried out by specific fusion methods, activity detection takes both contexts and user activities into account, and reasoning is performed with rule-based or statistical methods. Activity context integration thus comprises analysis, fusion, and reasoning. Our ultimate goal is to design and develop an approach to context fusion, activity detection, and reasoning for interacting with situation-aware mobile computing systems. The proposed middleware consists of activity context integration and context reasoning, where activity context integration entails classifying and integrating input contexts according to each user in smart environments.

Much of the existing research has mainly addressed context and activity awareness in desktop environments [5–9]. In order to obtain high-level semantic information in real time on mobile devices, however, a context-aware middleware that collects and integrates various contexts is required. In particular, it is important to develop an architecture for activity context integration that enables decisions to be made based on context data from large numbers of heterogeneous sensors (e.g., mobile and environmental sensors). In addition, to produce meaningful information on a smartphone, activity context integration should follow the characteristics of each activity context. Therefore, in this paper we propose a context-aware embedded middleware that generates high-level integrated contexts through the fusion of a smartphone's internal and external sensors. The proposed system extracts semantic information such as the user's attention, orientation, activities, objects of interest, and surrounding environment by fusing and reasoning about various contexts obtained from heterogeneous sensors in a mobile computing environment. The available sensors comprise internal sensors (e.g., accelerometer, gyroscope, ambient light sensor, digital compass, internal GPS, and internal Wi-Fi) and external sensors (e.g., input devices, external light sensors, temperature sensors, and noise sensors). The proposed middleware generates a reliable integrated context through sensor fusion in the smartphone by filtering out unnecessary context information, and it creates a robust integrated context according to the characteristics of the information obtained from the smartphone's internal or external sensors.

3 Activity Context Integration

3.1 Activity Recognition Approach

The proposed middleware operates as shown in Fig. 1. First, it applies signal processing and feature extraction to extract features, such as sinusoidal peaks, from the raw data of the smartphone's internal and external sensors (a sketch of this step follows Fig. 1). It then drives an application by generating appropriate activities after activity context integration based on the 5W1H contexts.

Fig. 1. Process of activity context integration
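The following is a minimal sketch of the kind of peak-based feature extraction described above. The sampling rate, amplitude threshold, and minimum peak distance are illustrative assumptions, not values from the paper.

```python
# Sketch: extracting sinusoidal-peak features from raw accelerometer
# data. FS, the height threshold, and the peak spacing are assumed
# values for illustration only.
import numpy as np
from scipy.signal import find_peaks

FS = 50  # assumed sampling rate in Hz

def peak_features(ax, ay, az):
    """Return simple peak-based features from one window of raw samples."""
    mag = np.sqrt(ax**2 + ay**2 + az**2)  # acceleration magnitude
    mag = mag - mag.mean()                # remove gravity/DC offset
    # Peaks at least 0.25 s apart and above an assumed amplitude threshold.
    peaks, props = find_peaks(mag, height=0.5, distance=FS // 4)
    return {
        "peak_count": len(peaks),
        "peak_rate_hz": len(peaks) * FS / len(mag),
        "mean_peak_height": float(props["peak_heights"].mean()) if len(peaks) else 0.0,
    }
```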

Table 1 analyzes candidate algorithms for context fusion and reasoning. The user activities obtained by applying these algorithms correspond to basic daily behavior. A user activity comprises a primary activity for a simple action and a secondary activity that is determined by reinforcing the primary activity with surrounding information. For example, primary activities include walking, running, sitting, hiking, jogging, and climbing stairs up/down, while secondary activities include cooking, eating, moving a hand, watching TV, and talking. Fig. 2 shows the structure of the context-aware embedded middleware. The proposed middleware classifies the 5W1H (Who, What, Where, When, Why, and How) contexts of the various sensors according to each user (an illustrative 5W1H record is sketched after Fig. 2). The middleware recognizes user activities by applying reasoning algorithms suited to the characteristics of the six 5W1H elements, and the activity recognition process makes proper decisions by utilizing learned information in a knowledge base.

Table 1. Analysis of reasoning algorithms
Fig. 2. Structure of the context-aware embedded middleware
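The 5W1H classification can be made concrete with a simple context record. The field names and example values below are our own illustrative choices, since the paper does not specify a schema.

```python
# Illustrative 5W1H context record; field names are assumptions,
# since the paper does not define a concrete schema.
from dataclasses import dataclass
import time

@dataclass
class Context5W1H:
    who: str     # user identifier
    what: str    # object or activity of interest, e.g. "walking"
    where: str   # location, e.g. GPS coordinates or "office"
    when: float  # timestamp of the observation
    why: str     # inferred intention, filled in by reasoning
    how: str     # sensing source / modality

ctx = Context5W1H(who="user-1", what="walking",
                  where="37.5665,126.9780", when=time.time(),
                  why="commuting", how="accelerometer+gps")
```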

3.2 Activity Recognition Implementation

The proposed middleware supports a recognition algorithm that integrates a user's location, interactions, and the smartphone's internal/external sensors. For temporal consistency, we applied an HMM (Hidden Markov Model) in the activity recognition process. The HMM-based activity recognition proceeds in the following steps. In the initial step, our hypothesis is that the user is using a smartphone. Figure 3 illustrates how the various sensors inside the smartphone are used. Our middleware validates the user's activity by utilizing these sensors, as shown in Fig. 1.

Fig. 3. Sensor data used in a smartphone

  • Initial Step: Since the raw input values from the audio sensor that measures ambient noise are difficult to process directly, we apply an FFT (Fast Fourier Transform) and convert the magnitude to a dB level (as shown in Fig. 4). We correct the accelerometer, gyroscope, and device motion (smartphone motion) data by applying a low-pass filter, because these sensors are sensitive to changes in their input values (a sketch of both steps follows Fig. 4). Figure 4 shows the result of experiments in which the x-axis values were explicitly changed (the y-axis and z-axis were fixed).

    Fig. 4. Sensor signal processing (left: raw input, right: filtered output)
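A minimal sketch of the two preprocessing operations in this step follows. The smoothing factor and the use of an exponential moving average as the low-pass filter are assumptions; the paper does not specify the filter design.

```python
# Sketch of the initial step: (1) FFT magnitude converted to dB for
# the ambient-noise signal, (2) a simple exponential low-pass filter
# for accelerometer/gyroscope samples. ALPHA is an assumed value.
import numpy as np

ALPHA = 0.2  # assumed low-pass smoothing factor (0 < ALPHA <= 1)

def noise_level_db(audio_frame):
    """Ambient-noise magnitude in dB via FFT of one audio frame."""
    spectrum = np.fft.rfft(audio_frame)
    magnitude = np.abs(spectrum).mean()
    return 20.0 * np.log10(magnitude + 1e-12)  # epsilon avoids log(0)

def lowpass(samples):
    """Exponential moving-average low-pass filter over a 1-D signal."""
    out = np.empty_like(samples, dtype=float)
    out[0] = samples[0]
    for i in range(1, len(samples)):
        out[i] = ALPHA * samples[i] + (1.0 - ALPHA) * out[i - 1]
    return out
```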

  • Step 1: Next, we compensate for the posture and position of the smartphone using the device orientation, the digital compass, and user input so that the sensor data can be utilized correctly (as shown in Fig. 5; a sketch of this compensation follows the figure).

    Fig. 5. Implemented activity context integration method
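One common way to realize this compensation is to rotate device-frame sensor vectors into an approximate world frame using the device orientation. The sketch below assumes a ZYX (yaw-pitch-roll) rotation convention, which varies by platform; it is an illustration, not the paper's exact method.

```python
# Sketch of posture compensation: rotating a device-frame acceleration
# vector into an approximate world frame from (roll, pitch, yaw) in
# radians. The ZYX convention here is an assumption.
import numpy as np

def to_world_frame(acc_device, roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    R = Rz @ Ry @ Rx  # device -> world rotation
    return R @ np.asarray(acc_device)
```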

  • Step 2: As shown in Fig. 5, we develop a quantization procedure that converts continuous values into discrete values to handle the frequent changes in the gyroscope and acceleration data, and we feed the result to the HMM. The acceleration sensor is well suited to recognizing the user's actions with minimal data loss, because it can detect changes in speed, vibration, and impact per unit time. In particular, combining a gyroscope with an accelerometer lets us determine the user's activity easily from the six-axis information. When a shock caused by the user's motion is transmitted to the smartphone, we can extract a specific pattern from the acceleration and gyroscope data. We then infer the primary activity (standing, sitting, walking, or running) by using the state transitions of the HMM (a sketch follows Fig. 6). Figure 6 shows the state transition diagram of the HMM. The initial state of the diagram is Standing, the most frequent activity in the probability distribution.

    Fig. 6. State transition diagram
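The following sketch shows quantization followed by Viterbi decoding over such an HMM. The paper specifies only the four states and Standing as the start state; the quantization thresholds, transition matrix, and emission matrix below are illustrative assumptions.

```python
# Sketch of Step 2: quantize continuous motion features into discrete
# symbols, then infer the primary activity with Viterbi decoding.
# All probabilities are assumed values for illustration.
import numpy as np

STATES = ["standing", "sitting", "walking", "running"]

def quantize(acc_var):
    """Map continuous accelerometer variance to a discrete symbol."""
    return 0 if acc_var < 0.05 else (1 if acc_var < 0.5 else 2)

start = np.array([1.0, 0.0, 0.0, 0.0])        # always begin in Standing
trans = np.array([[0.70, 0.10, 0.15, 0.05],   # assumed transition matrix
                  [0.15, 0.75, 0.08, 0.02],
                  [0.10, 0.05, 0.70, 0.15],
                  [0.05, 0.02, 0.23, 0.70]])
emit = np.array([[0.80, 0.15, 0.05],          # assumed emission matrix
                 [0.90, 0.08, 0.02],          # rows: states,
                 [0.10, 0.75, 0.15],          # cols: symbols 0..2
                 [0.02, 0.18, 0.80]])

def viterbi(obs):
    """Most likely state sequence for a list of quantized symbols."""
    logp = np.log(start + 1e-12) + np.log(emit[:, obs[0]] + 1e-12)
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans + 1e-12)
        back.append(scores.argmax(axis=0))
        logp = scores.max(axis=0) + np.log(emit[:, o] + 1e-12)
    path = [int(logp.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [STATES[s] for s in reversed(path)]

print(viterbi([quantize(v) for v in (0.01, 0.02, 0.8, 1.4, 1.2)]))
```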

  • Step 3: Next, we infer the secondary activity from the user's location information, user input, and the primary activity inference result. The reasoning method applies both rule-based reasoning and a Naïve Bayes classifier, as shown in Fig. 7 (a sketch of this combination follows the figure).

    Fig. 7. The secondary activity recognition process
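A minimal sketch of one way to combine a rule with a Naïve Bayes classifier for this step follows. The priors, likelihood tables, rule, and activity labels are illustrative assumptions rather than the paper's actual model.

```python
# Sketch of Step 3: a rule (explicit user input wins) backed by a
# Naive Bayes classifier over location and primary activity.
# All probabilities and labels are assumed values.
SECONDARY = ["cooking", "eating", "watching_tv"]

prior = {"cooking": 0.3, "eating": 0.3, "watching_tv": 0.4}
# Assumed per-feature tables of P(value | secondary activity).
likelihood = {
    "location": {"kitchen":     {"cooking": 0.7, "eating": 0.4, "watching_tv": 0.1},
                 "living_room": {"cooking": 0.1, "eating": 0.3, "watching_tv": 0.8}},
    "primary":  {"standing":    {"cooking": 0.6, "eating": 0.2, "watching_tv": 0.1},
                 "sitting":     {"cooking": 0.2, "eating": 0.7, "watching_tv": 0.8}},
}

def classify(evidence):
    # Rule-based shortcut: an explicit user input overrides the classifier.
    if evidence.get("user_input") in SECONDARY:
        return evidence["user_input"]
    scores = dict(prior)
    for feature, value in evidence.items():
        table = likelihood.get(feature, {}).get(value)
        if table:  # multiply in P(value | activity) for known features
            for act in SECONDARY:
                scores[act] *= table[act]
    return max(scores, key=scores.get)

print(classify({"location": "kitchen", "primary": "standing"}))  # -> cooking
```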

  • Step 4: Finally, we develop an embedded middleware application that recognizes the user’s activity in mobile computing environments.

4 Conclusion

In this paper we proposed activity context integration as a means of extracting semantic information by integrating situational information from the heterogeneous sensors of a smartphone. The proposed activity context integration provides a foundation for interacting with situation-aware mobile computing systems. Our approach evaluates, adapts, and extends multiple fusion methods to find a solution for robust activity context integration. It incorporates real-time fusion of contexts obtained from multiple sensors and facilitates real-time maintenance through information fusion under online, dynamic changes of sensors in mobile computing environments.