1 Introduction

In recent years, humans have become more connected. In the coming years, billions of new devices, each capable of collecting information, communicating and interacting with the environment, are expected to be deployed around the world. Cloud [4] and Fog [9] computing are technologies that aim to deal with these kinds of devices.

Cloud Computing aims to deploy computational systems in highly distributed environments and to handle the configuration of resources. However, because cloud data centers are typically located far from end devices, Cloud Computing is not a good choice for applications that need frequent communication or real-time responses. For this purpose, Fog Computing was proposed. Fog Computing is implemented at the edge of the network and provides low latency, location awareness and improved Quality of Service (QoS) for streaming and real-time applications.

A major challenge is that most of these new devices will be mobile, so dealing with this characteristic becomes an important issue. Integrating mobility prediction with Cloud/Fog Computing studies can be a way to face this problem. In this work we present an overview of three research fields (Handover, Computation Offloading and Resource Management) in Cloud/Fog computing, presenting state-of-the-art works, and we discuss how mobility prediction techniques can be used in their context.

The remainder of this paper is organized as follows. Section 2 presents an overview of human mobility prediction and some of its applications. Section 3 discusses how mobility prediction can be applied in Cloud/Fog environments to improve their capabilities. Finally, Sect. 4 discusses challenges and open issues in applying human mobility studies to the Cloud/Fog computing field and presents conclusions and future work.

2 Human Mobility Prediction

To improve urban mobility, accessibility and quality of life, it is important to understand how people travel and conduct their activities. This issue has been one of the major focuses of city planners, geographers and transportation planners. Human mobility research characterizes mobility patterns such as walking home, driving to workplaces or using the public transportation system, and it can be applied in several fields such as epidemic control, urban planning and traffic forecasting systems. Urban human mobility prediction refers to the estimation of a person's next location.

Current human mobility models can be classified into two groups: trace-based and synthetic models [18]. Trace-based models generally use GPS (Global Positioning System) traces, Bluetooth connectivity observations or Call Detail Records (CDRs). A problem with trace-based models is that the data are collected in a specific place and, as a consequence, their applicability can be limited. Synthetic models are defined on a mathematical basis, which makes them widely used in simulations. A drawback of this approach is that it often has limited similarity with real-world mobility behavior. Figure 1 presents a taxonomy of the presented models.

Fig. 1. Taxonomy of presented models

To define which data collection technique will be used, it is important to take into consideration what kind of application will be conducted and the minimum accuracy the user expects. Many studies use GPS data to track people. With the popularization of smartphones, it has become very easy to obtain GPS data from users. Advantages of GPS data are its accuracy in outdoor locations and its low collection cost. A drawback is that a user relying on the GPS embedded in a smartphone tends to keep the GPS turned off to save battery. In the literature it is possible to find articles that use GPS receivers embedded in smartphones [35], taxis [37], public transportation [26] and private vehicles [15].

It is also possible to use WiFi scanning to predict human mobility [25]. While people are walking, their devices can automatically connect to WiFi Access Points (APs). As soon as they connect, this information becomes available in real time and the model can detect where the user is. When the user connects to another AP, the system learns his/her trajectory. Nowadays, the usage of Call Detail Records (CDRs) in human mobility studies has become very popular [36]. CDRs are recorded every time a voice call, SMS or Internet activity occurs. Each CDR is composed of the user ID, the cell ID of the handling tower, and the date and time of the phone activity.

Other kinds of data come from social networks. Existing works take advantage of check-ins or geotags posted on social networks such as Twitter [11], Flickr [7] and Foursquare [1] to try to infer the user's movement. They use the last and second-to-last reported locations to try to reconstruct the user's possible route. Additionally, they use information about possible Points of Interest and the visiting probabilities of the most frequented places to deduce the user's route.

The most commonly used mobility models are Lévy Walks [29] and the Radiation Model [34]. A Lévy Walk is a random walk in which the step lengths follow a heavy-tailed probability distribution. Intuitively, Lévy walks consist of many short flights and occasional long flights, where a flight is defined as the longest straight-line trip of a human from one location to another without a directional change or pause.
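
As a rough illustration of this heavy-tailed behavior, the sketch below generates a 2D Lévy walk by sampling flight lengths from a Pareto distribution and flight directions uniformly at random; the exponent and minimum flight length are illustrative choices, not parameters taken from [29].

```python
import numpy as np

def levy_walk(n_flights, alpha=1.5, l_min=1.0, seed=0):
    """Minimal sketch of a 2D Levy walk: flight lengths follow a heavy-tailed
    (Pareto) distribution, directions are uniform. alpha and l_min are
    illustrative parameters, not values taken from [29]."""
    rng = np.random.default_rng(seed)
    # Inverse-transform sampling of a Pareto distribution P(l) ~ l^-(1+alpha)
    lengths = l_min * (1.0 - rng.random(n_flights)) ** (-1.0 / alpha)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_flights)
    steps = np.column_stack((lengths * np.cos(angles),
                             lengths * np.sin(angles)))
    # Visited positions, starting from the origin
    return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))

positions = levy_walk(1000)
print(positions[:5])
```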

The Radiation Model is a stochastic process that captures local mobility decisions and allows commuting and mobility fluxes to be derived analytically, requiring as input only information on the population distribution. It predicts mobility and transport patterns that agree well with those observed in a wide range of phenomena. Given its parameter-free nature, the model can be applied in areas lacking previous mobility measurements, significantly improving the predictive accuracy of most of the phenomena affected by mobility and transport processes. Other works in the literature propose their own human mobility prediction models.
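
For reference, the expected commuter flux between two locations in the radiation model is usually written as below; the notation follows the common presentation of the model (cf. [34]).

```latex
% Expected commuter flux between locations i and j in the radiation model.
\[
  \langle T_{ij} \rangle \;=\; T_i \,
  \frac{m_i \, n_j}{(m_i + s_{ij})\,(m_i + n_j + s_{ij})}
\]
% T_i   : total number of trips (commuters) starting at location i
% m_i   : population of the source location i
% n_j   : population of the destination location j
% s_ij  : total population within the circle of radius r_ij centered at i,
%         excluding the populations of i and j
```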

The study of human mobility creates several possibilities for applying the acquired knowledge, and several areas can take advantage of this field. The most direct application of this kind of study is the characterization of human trajectories. With the proposed models it is possible to determine with high accuracy what trajectory a person or a group will take at a certain moment on a certain day.

In the networking research field it can help in different aspects of network operation such as handover, resource management, routing and better independent deployment of connectivity models [19]. It is possible to apply human mobility in a network to achieve large-scale dissemination in a dynamic Device-to-Device communication network [2, 14]. Human mobility prediction can also be applied in the field of Mobile Cloud Computing [16, 24]. For example, it is possible to use the user's next location to determine which cloud is the best one to migrate an application that the user is executing [3, 33]. Delay Tolerant Networks (DTNs) and Opportunistic Networks (OppNets) can also benefit from the usage of human mobility prediction [20, 31]. These kinds of networks rely on inter-contact times as opportunities to forward messages from one device to another. If the user's next location is known, a node can determine the best receiver based on its location.

3 Application of Mobility Prediction in Cloud/Fog Computing

This section presents and discusses some research areas of Cloud and Fog Computing and how human mobility prediction can be used to improve their capabilities. We cover the usage of human mobility prediction techniques in Handover, Computation Offloading and Resource Management in Cloud/Fog Computing. In the presented works, mobility prediction was used as an input source for decision algorithms. With this information the algorithms can determine how to better deal with the user's mobility and what actions to take to avoid wasting resources (time, computing capacity, money). Table 1 presents a summary of the presented papers and contributions.

Table 1. Summary of presented papers

3.1 Handover in Cloud/Fog Environments

One of the main challenges in mobile environments is how to deal with handover procedures. Handover, or handoff, is the process whereby a node keeps its connection active while moving from one point of attachment to another. The idea is that the equipment should select the best base station to connect to. One simple rule to determine the best base station is the received signal strength (RSS) level; in other words, the equipment selects the base station with the strongest signal [5]. If another base station provides a higher RSS than the current one, the user equipment changes its association. This can occur, for example, when the user equipment moves away from the current base station. Other metrics besides RSS can also be used. Many studies have been conducted to evaluate the best time to perform a handover procedure [12].
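
A minimal sketch of such an RSS-based rule is shown below; the hysteresis margin and station names are illustrative and not taken from [5] or [12].

```python
def should_handover(current_rss_dbm, neighbor_rss_dbm, hysteresis_db=3.0):
    """Illustrative RSS-based handover rule: hand over only if a neighbor
    base station is stronger than the current one by at least a hysteresis
    margin, to avoid ping-pong handovers. Values are hypothetical."""
    best_bs, best_rss = max(neighbor_rss_dbm.items(), key=lambda kv: kv[1])
    if best_rss >= current_rss_dbm + hysteresis_db:
        return best_bs
    return None

# Example: the device is attached to a station received at -85 dBm.
target = should_handover(-85.0, {"bs_a": -90.0, "bs_b": -78.0})
print(target)  # -> "bs_b"
```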

Bao et al. [6] proposed a framework known as Follow Me Fog (FMF) to support a seamless handover timing scheme among different computation access points. The proposed framework has a mechanism to pre-migrate a job when a handover procedure is expected to happen. The FMF framework constantly monitors RSSs from different fog nodes to determine when a job needs to be migrated. When the RSS from the current fog node is decreasing while, at the same time, the RSS from a neighbor fog node is increasing, the computation jobs are migrated before the connection redirection. As a result, the service can be resumed when the mobile device is redirected to the new fog node. The authors developed a prototype and their evaluation demonstrated that FMF can achieve a latency reduction of 36.5% when a mobile device is handed over from one fog node to another.

Chen and Tsai [10] presented a new mobility management mechanism using an integrated strategy of Follow Me Cloud (FMC) and Follow Me Edge (FME), called Follow Me Cloud-Cloudlet (FMCL), for smart cities. The FMCL approach aims to reduce the total transmission time by pre-scheduling and pre-storing some data packets in the cache of a cloudlet when a user is switching from the previous Fog-RAN (Radio Access Network) to the serving Fog-RAN. FMCL is evaluated through simulation in terms of the total transmission time, the throughput, the probability of packet loss, and the number of control messages. The proposed FMCL approach outperforms existing FMC results in terms of the total transmission time, the average throughput, and the probability of packet loss, but with higher overhead due to the larger number of control messages.

Zhang et al. [38] presented an architecture for Fog RAN (FRAN) and then proposed a handover management mechanism using edge caching in FRAN. The authors argued that conventional handover schemes are mainly based on RSS, where handover decisions are made by comparing it to a predefined threshold. Their mechanism treats APs as a resource for mobile devices, turning the handover process into a resource allocation problem. Through simulation the authors concluded that the proposed FRAN architecture, in conjunction with the mobility management scheme, can significantly decrease the signaling overhead of handover compared to conventional RANs.

Handover management brings several challenges in Cloud/Fog scenarios. For instance, the higher the mobility of a user, the higher his/her handover rate will be. Another problem occurs if a node stays attached for only a short period of time, as there might not be sufficient time for the system to complete the handover procedure. Human mobility prediction could be used to improve the performance of handover procedures. If the user's next location is known, the handover procedure can be performed in an optimal way. In a scenario with a dense network of fog nodes, as expected in 5G networks, when a node starts to move and its next location is known by the network, the fog node closest to that location can be made responsible for accepting the handover, optimizing the procedure. Another possible usage of mobility prediction is when the moving node has packets to receive. Instead of waiting to deliver those packets to the node, the sender can send them directly to the destination fog node, where they are cached until the node completes its handover procedure.
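
A minimal sketch of this idea, assuming the predictor returns the user's next location as planar coordinates and fog node positions are known, is given below; all identifiers are hypothetical.

```python
import math

def pick_target_fog(predicted_location, fog_nodes):
    """Illustrative sketch: given a predicted next location, choose the fog
    node closest to it (Euclidean distance) as the handover target."""
    def dist(node):
        (x, y), (px, py) = node["pos"], predicted_location
        return math.hypot(x - px, y - py)
    return min(fog_nodes, key=dist)

fogs = [{"id": "fog-1", "pos": (0.0, 0.0)}, {"id": "fog-2", "pos": (5.0, 1.0)}]
target = pick_target_fog((4.2, 0.8), fogs)
# Pending packets for the moving node could then be forwarded to `target`
# and cached there until the handover completes.
print(target["id"])  # -> "fog-2"
```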

3.2 Computation Offloading in Cloud/Fog Environments

Computation offloading is a process whereby tasks that demand a large amount of resources are executed over a Cloud/Fog infrastructure in order to overcome the resource limitations of mobile devices and to try to reduce the total execution time. Due to their limited and non-scalable processing power, mobile devices take longer to execute intensive computations than the cloud does. The transmission time for offloading computations and retrieving the results is an important factor in determining whether the offloading process will be beneficial [8]. Besides the increase in computational power when users offload their tasks, they also benefit from decreased energy consumption.
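
The basic trade-off can be sketched as follows: offloading pays off when the time to upload the input, execute remotely and download the result is smaller than the local execution time. The code below illustrates this rule with hypothetical parameter values; it is not the decision model of any specific work cited here.

```python
def offload_is_beneficial(cycles, local_speed, remote_speed,
                          data_in_bits, data_out_bits, bandwidth_bps):
    """Illustrative offloading trade-off: offload when upload + remote
    execution + download time beats local execution time. Energy could be
    compared in the same way."""
    t_local = cycles / local_speed
    t_remote = (data_in_bits / bandwidth_bps        # send input to cloud/fog
                + cycles / remote_speed             # execute remotely
                + data_out_bits / bandwidth_bps)    # retrieve the result
    return t_remote < t_local

# Example: 5 Gcycle task, 1 GHz device vs. 10 GHz server,
# 2 MB input, 0.1 MB output, 10 Mbit/s link.
print(offload_is_beneficial(5e9, 1e9, 10e9, 2 * 8e6, 0.1 * 8e6, 10e6))
```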

Farris et al. [13] formulated the proactive migration problem at the network edge. They applied prediction schemes of user mobility patterns to improve their results. The authors defined two integer linear optimization problems aiming, on the one hand, to minimize the Quality of Experience (QoE) degradation due to service migration and, on the other hand, to minimize the cost of proactive replication. The proposed algorithms were evaluated in terms of the probability of reactive user migration and the average number of replicas per user.

Lee and Shin [22] developed an offloading decision-making technique based on a mobility model of each individual user known as Mob-Aware. This mobility model takes advantage of the regularity of user’s mobility pattern and it is characterized by a sequence of networks to which users are connected. The Mob-Aware decision maker gathers previous user movements and network changes corresponding to the movements, builds a mobility model with gathered data, and then makes offloading decisions. The authors evaluated their technique using a trace-based simulation with real log data traces from 14 Android users. The results showed that their technique, when users are highly mobile, can increase the performance of mobile devices in terms of response time and energy consumption.

Li et al. [23] presented a mobility prediction based offloading heuristic for mobile device clouds. As nodes are usually connected via wireless technology and can change their locations from time to time, the connections between devices are usually unstable and offloaded applications may fail. The authors' proposal aims to guarantee that users can continue their offloaded applications seamlessly regardless of node mobility. The simulation results showed that in a Mobile Cloud Computing environment, due to node mobility, it is very difficult to build a robust and effective environment for client nodes to offload computation-intensive applications. With the help of mobility prediction, the proposed heuristic can complete the offloaded applications as soon as possible and with the least risk of failure.

Shi et al. [32] proposed a cloudlet service model and formulated the service scheduling problem in cloudlets aiming to find the optimal service running sequence which minimizes the average service response time during the whole running process of the service for a user. The authors presented an algorithm known as Mobility Prediction-based Markov Decision Process (MPMDP) that takes user’s mobility prediction into account and uses Markov Decision Process to make a decision on which cloudlet the services should run. Their proposal was evaluated by simulation using real world traces. The results showed that MPMDP achieves a lower average response time compared with previous schemes.

Having information about user mobility is essential to optimize computation offloading. If a node is moving and needs to offload some task, it is better to offload the task to a cloud/fog closer to its final destination. In this way, the node can receive the results when it finishes moving, avoiding unnecessary communication in the network. Another possible approach is, instead of fully offloading a task to a cloud/fog close to the final destination, to offload it partially; in other words, to divide the task into parts and offload those parts along the trajectory, sending them to the clouds/fogs on the node's way. Another issue in computation offloading is that users must know how long they will stay at a given place. With this information they can decide whether it is worth offloading their task at that moment. If they realize that they will not stay long enough to finish the offloading process, they can decide to wait until they arrive at an area where they will stay long enough to finish it. Mobility prediction can also be used to re-route results effectively, since a user can be in a different place from the one where he/she initially started the offloading process.
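
A minimal sketch of this dwell-time check, assuming the mobility predictor supplies an estimated residence time for the current area, is shown below; the function name, parameters and safety margin are illustrative.

```python
def can_finish_before_leaving(predicted_dwell_s, upload_s, exec_s, download_s,
                              margin_s=1.0):
    """Illustrative rule: offload only if the whole offloading cycle
    (upload, remote execution, result download) fits in the predicted
    dwell time, with a safety margin."""
    return upload_s + exec_s + download_s + margin_s <= predicted_dwell_s

# If the check fails, the device can defer offloading until the predictor
# indicates an area where the user will stay long enough.
print(can_finish_before_leaving(predicted_dwell_s=30.0,
                                upload_s=4.0, exec_s=12.0, download_s=2.0))
```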

3.3 Resource Management in Cloud/Fog Environments

Resource management is one of the main research fields in Cloud/Fog Computing. Providing resources at the edge of the network (closer to end devices) brings several benefits, such as low latency. Resource management involves two different questions: where the best place to allocate resources is, and when and how much resource needs to be allocated. The main idea is that resource management has to meet the agreed QoS constraints while minimizing resource waste. To achieve this, placement and scheduling strategies can play a major role by keeping track of the status of available resources.

Gao et al. [17] studied the resource allocation problem for cloud-based cache-enabled small cell networks. In the proposed model, the contents that users request are stored both at the cloud pool and in the cache storage of each small base station. Additionally, the cloud pool can predict users' mobility patterns and determine the resource allocation scheme over a period of time. The authors formulated the problem using Game Theory and, to solve it, proposed a machine learning based resource allocation method. Simulation results show that the proposed algorithm achieves up to 58.2% and 26.1% gains in network throughput compared to the random and nearest allocation algorithms, respectively.

Karimzadeh et al. [21] proposed an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service). MOBaaS is composed of two algorithms whose objective is to predict user mobility and network link bandwidth availability. The information provided by MOBaaS can be used to generate triggers for on-demand deployment, provisioning and disposal of virtualized network components, as well as for self-adaptation procedures and optimal network function configuration during run-time operation. The authors implemented MOBaaS on the OpenStack platform and their results confirmed the feasibility and the effectiveness of the prediction algorithms and the proposed architecture.

Mustafa et al. [27] introduced a solution to reduce the effect of resource mobility on the performance of the vehicular cloud, using an efficient resource management scheme based on vehicle mobility prediction. Their mobility prediction model is based on an Artificial Neural Network that enables the vehicular cloud to take pre-planned actions. The main objective is to reduce the negative impact of sudden changes in vehicle locations on vehicular cloud performance. Simulation results show that the proposed approach improves the performance of the vehicular cloud effectively without overusing available vehicular cloud resources, when compared to other resource management approaches introduced in the literature.

Ojima and Fujii [28] proposed a resource management scheme for Mobile Edge Computing using user mobility prediction. User mobility prediction is performed with a linear Kalman filter that estimates connectivity. With mobility prediction, users can select a more stable edge server for task requests and task collection, decreasing the failure rate that would force users to process the tasks again. Simulation results have shown that this process improves the success rate.
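
As a rough illustration of this kind of predictor, the sketch below runs a generic constant-velocity Kalman filter over noisy 2D position observations and uses the prediction step to estimate the user's next position; it is a simplified stand-in, not the exact filter used in [28], and all matrices and noise levels are illustrative.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # state transition; state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # only the position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)              # process noise (illustrative)
R = 0.5 * np.eye(2)               # measurement noise (illustrative)

x = np.zeros(4)                   # initial state estimate
P = np.eye(4)                     # initial covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the observed position z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([2.1, 1.0])]:
    x, P = kalman_step(x, P, z)

print("predicted next position:", (F @ x)[:2])  # one-step-ahead position
```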

Plachy et al. [30] presented an algorithm that enables flexible selection of the communication path together with dynamic Virtual Machine (VM) placement. The authors use mobility prediction for dynamic VM placement and to find the most suitable communication path according to the expected movement of users. Compared to state-of-the-art approaches, the proposed algorithm reduces the task offloading delay by between 10% and 66% while keeping the energy consumed by the user's equipment at a similar level.

Adding knowledge about user mobility can optimize resource allocation and placement in a Cloud/Fog infrastructure. As the location of data centers is crucial to optimize resource utilization and to improve service performance, QoE can be enhanced in terms of content access latency by placing user content at locations where users will be present in the future. This approach can be used, for example, when some kind of event is taking place and the infrastructure needs to reserve resources to serve every node. In such applications, fog or cloud nodes could reserve a share of their resources for a specific node or for a group of nodes heading to a nearby place. In a cloudlet infrastructure, for example, if users end up far away from cloudlets due to their mobility, network connectivity can become poor and, consequently, so does the user experience. The main idea is that services and applications allocate resources according to users' future locations, so that the resources are usually close to the users.
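
A minimal sketch of this reservation idea, assuming a predictor that maps each user to the fog node nearest his/her future location, is given below; the identifiers and capacities are purely illustrative.

```python
from collections import Counter

def reserve_by_predicted_demand(predicted_fog_per_user, capacity_per_fog,
                                units_per_user=1):
    """Illustrative sketch: aggregate predicted arrivals per fog node and
    reserve capacity there, capped by what each node can offer."""
    demand = Counter(predicted_fog_per_user.values())
    return {fog: min(count * units_per_user, capacity_per_fog.get(fog, 0))
            for fog, count in demand.items()}

predictions = {"user1": "fog-a", "user2": "fog-a", "user3": "fog-b"}
print(reserve_by_predicted_demand(predictions, {"fog-a": 10, "fog-b": 1}))
# -> {'fog-a': 2, 'fog-b': 1}
```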

4 Final Considerations

This work presented an overview of the Handover, Computation Offloading and Resource Management research fields in Cloud/Fog Computing, presenting some state-of-the-art works. Additionally, we discussed how mobility prediction techniques could be used to improve Cloud/Fog Computing capabilities.

Cloud and Fog Computing already have interesting results in the literature and, as presented in the previous sections, researchers are already dealing with mobility issues in their works. As open issues remain, the mobility prediction problem will continue to be a hot research topic. Having mobile devices and mobile resources creates many challenges, such as how to handle the unreliable connectivity of those resources, how to provide seamless handovers, and which model best predicts a node's next location. Having mobile resources introduces another level of complexity in resource management algorithms.

The combination of mobility prediction with Cloud/Fog Computing brings many advantages, as shown before, but it still has some open issues. It is necessary to create strategies to deal with mobility prediction failures; in other words, to define how the system will behave if the predicted next location is wrong. Another point is understanding users' behavior and mobility patterns to better plan application scheduling; in other words, it is important to know the right time to instantiate or migrate resources in order to decrease waiting times.

Besides the usage of user mobility prediction, virtualization technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) should be of great value for dealing with problems in cloud and fog environments. Both technologies can help in the virtualization process and can also improve the performance of the system. As future work we aim to characterize this interaction and further analyze how mobility prediction could be helpful in this new scenario.