Smart Grid Management Using Cloud and Fog Computing

Conference paper
Part of the Lecture Notes on Data Engineering and Communications Technologies book series (LNDECT, volume 22)


Cloud computing provides Internet-based services to its consumers. Simultaneous requests on a cloud server cause processing latency. Fog computing acts as an intermediary layer between Cloud Data Centers (CDC) and end users to minimize the load and boost the overall performance of the CDC. For efficient electricity management in smart cities, Smart Grids (SGs) are used to fulfill the electricity demand. In this paper, a system is proposed to minimize energy wastage and distribute surplus energy among energy-deficient SGs. A three-layered cloud- and fog-based architecture is described for efficient and fast communication between SGs and electricity consumers. To manage the SGs' requests, fog computing is introduced to reduce the processing time and response time of the CDC. For efficient scheduling of SG electricity requests on fog servers, the proposed system compares three load balancing algorithms: Round Robin (RR), Active Monitoring Virtual Machine (AMVM) and Throttled. A dynamic service broker policy is used to decide to which fog server each request should be routed. The proposed system is evaluated in the Cloud Analyst tool; the results show that AMVM and Throttled outperform RR when the virtual machine placement cost at fog servers is varied.


Keywords: SG Management · Load Balancing Algorithm · Service Broker Policy · Cloud Server

1 Introduction

Cloud computing provides services to its users through web-based tools and applications. Cloud service providers such as Google Cloud Platform, Amazon Web Services, Microsoft Azure and IBM Cloud maintain their own CDCs. According to [1], every data center has physical hardware (CPU, memory, storage disk and bandwidth) to store and process data, called a Physical Machine (PM). A CDC logically divides one PM into multiple machines, called Virtual Machines (VMs). Some resources of the PM are assigned to every VM, which is responsible for handling user requests. Resource assignment on a CDC is performed in two ways: static or dynamic scheduling. In static VM placement, the PM is divided into equal-sized VMs before any request is generated. In dynamic placement, the PM is divided into VMs at runtime: when a user generates a request, the system detects the strength of the request and assigns a VM according to its needs. Information in the cloud is accessible only through Internet services. The popularity of cloud computing is increasing steadily because of its services and the freedom of access it gives its users. Based on its services, the cloud is described in three models, shown in Fig. 1:
Fig. 1.

Architecture of Cloud Data Center

  • Software as a Service (SaaS)

  • Platform as a Service (PaaS)

  • Infrastructure as a Service (IaaS)

The first model, SaaS, provides software-like services (email, virtual desktops, gaming, communication) to clients, who can use licensed software through it. In PaaS, the cloud provides a runtime environment over the Internet. IaaS provides a pre-configured virtual infrastructure to consumers.
Fig. 2.

Cloud services

In cloud computing, scalability is a major challenge because the number of connected devices and the cloud response time are directly proportional, according to [2]. Delay-sensitive applications need fast communication, and the concept of fog computing is recommended to reduce the intensity of such challenges. Fog computing provides the same services as cloud computing and acts as an intermediary between cloud servers and consumers to reduce the overall load on cloud servers. It helps to minimize latency and to enhance the flexibility and reliability of the network. In the proposed system, fog computing services such as networking, storage and computation are used in SGs for efficient energy management of smart cities. Fog computing provides delay-sensitive communication between consumers and servers. Information and Communication Technology (ICT) converts the Traditional Grid (TG) into an SG with two-way communication. An SG is connected with other SGs and electricity consumers to fulfill the energy requirements. SGs share information with fog servers, which maintain the data of the SGs and predict their future load. Fog servers know which SGs of a region have surplus energy and which are deficient, and they help to arrange energy for deficient SGs via the cloud and the utility. If a whole region becomes energy deficient, the fog communicates with the cloud, and the cloud assigns a utility company to that region to fulfill the energy demand. In this paper, the proposed system covers a large residential and commercial area of six regions. Every region has its own fog servers to manage the requests of its SGs. Every building and Power Supply Station (PSS) has its own SG to fulfill the electricity demand. Every SG communicates with other SGs to form a coordination system for electricity sharing. In case of energy deficiency, an SG requests its corresponding fog to fulfill the electricity demand.
The fog manages the request and coordinates with the other SGs; if it finds surplus energy in any other SG, it assigns that surplus energy to the deficient SG. If there is no surplus energy in any SG of the region, the fog requests the cloud, and the cloud server assigns an appropriate utility company to the deficient SG. To balance the load on the fog, the proposed system uses three load balancing algorithms: RR, Throttled and AMVM (Fig. 2).
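The allocation step just described can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the names `SmartGrid`, `serve_deficient` and the kWh figures are all hypothetical, and the real system runs inside the Cloud Analyst simulator.

```python
from dataclasses import dataclass

@dataclass
class SmartGrid:
    name: str
    generation: float  # kWh generated in the current interval
    demand: float      # kWh demanded in the current interval

    @property
    def surplus(self) -> float:
        return self.generation - self.demand

def serve_deficient(grids, deficient):
    """Return the source that covers the deficit: a surplus SG, or the
    utility assigned via the cloud when no SG in the region has enough."""
    need = -deficient.surplus
    for sg in grids:
        if sg is not deficient and sg.surplus >= need:
            return sg.name          # a surplus SG in the region covers it
    return "utility-via-cloud"      # no regional surplus: escalate to cloud

grids = [SmartGrid("SG1", 140, 100), SmartGrid("SG2", 80, 110)]
print(serve_deficient(grids, grids[1]))  # SG1
```

Here SG2 is 30 kWh short and SG1 has 40 kWh of surplus, so the fog serves the deficit locally; only when every SG in the region is deficient does the request escalate to the cloud.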

1.1 Motivation

Fog computing provides delay-sensitive communication between consumers and servers. The demand for electricity is increasing steadily. To minimize electricity wastage and satisfy user requirements in an efficient and effective manner, resources and consumption need to be scheduled. Random electricity demand from consumers makes it difficult to fulfill the energy requirements. The authors in [3, 4, 5] explore the idea of energy scheduling by integrating the TG with ICT for two-way communication. For fast, flexible and reliable communication among different consumers, the system requires cloud computing. The authors in [3, 5] proposed the idea of cloud computing with an intermediary layer of fog computing to schedule energy demand. Cloud and fog computing help in runtime decision making [6]. The authors in [6] specified that the system needs to predict the energy demand to minimize wastage. In [6, 7], load predictor systems using SGs are proposed: an SG communicates with other SGs and develops a coordination network between different electricity consumers to estimate the overall load. Fog computing is used to minimize the load and processing time of the cloud. The fog acts as an intermediary layer between consumers and cloud servers and consists of VMs that perform in-network processing. To make the fog efficient and fast, the system needs load balancing algorithms and service broker policies to distribute the load among the different VMs of a fog. The authors in [4, 5] designed a cloud environment for a limited number of users in different regions to check the performance of their proposed load balancing algorithms. They emphasized optimizing the overall response time of fog servers for residential buildings while overlooking the overall system cost. The proposed system is motivated to optimize both the overall response time and the communication cost of fog computing while scheduling electricity requests received from SGs.

1.2 Contribution

In this paper, the proposed system minimizes the response and processing time of fog computing and reduces the overall system cost.

2 Related Work

For fast and delay-sensitive communication, cloud load management is a challenging field for researchers. To minimize the processing time of the cloud, authors have worked on different mechanisms and introduced an intermediary layer of fog computing. Fog computing achieves faster response times for the consumer than the cloud. The fog provides runtime decision-making capabilities through its computing devices, which are used to distribute the overall load of the fog. Many algorithms have been proposed to balance the load on cloud and fog computing and to reduce the VM cost and fog processing time. The most commonly used algorithms are Round Robin (RR), Throttled, honeybee foraging and biased random sampling. These algorithms are used to schedule user requests on the fog with minimum latency. The authors in [8] described a cloud load balancing algorithm and compared its results with RR, Throttled and honeybee foraging. Their proposed system performs better and balances the load successfully in a multiuser environment.

The researchers in [9] designed a honeybee-foraging-inspired system that minimizes the overall system processing time and the waiting time of a request in the queue before a VM is assigned. Their proposed system performs better than adaptive, adaptive-dynamic and heat-diffusion-based dynamic load balancing algorithms. The authors in [10] proposed Ant Colony Optimization (ACO) to overcome the problem of resource allocation in the fog, and compared ACO's energy consumption, processing time and standard deviation with RR. They claim that their ACO consumes less energy with less processing time than RR.

Another system that allocates tasks efficiently to balance the overall load is discussed in [11]. The proposed model allocates low-power tasks and improves system throughput by reducing system delay. In [12], the authors compare Cloud-Based Demand Response (CBDR) with Distributed Demand Response (DDR) and show that CBDR performs better than DDR with respect to the reliability and scalability of the communication network. DDR is channel dependent and returns an optimal solution within a few iterations, but it is unreliable due to information loss, while CBDR is channel independent and cost-efficient. The authors in [5] compare six load balancing algorithms: RR, Throttled, Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Ant Colony Optimization (ACO) and a hybrid of ABC and ACO called Hybrid Artificial Bee and Ant Colony Optimization (HABACO). Their proposed HABACO outperforms the other five algorithms.

3 Problem Formulation

The objective of the proposed system is to schedule SG electricity requests while minimizing the overall processing time and VM placement cost. Suppose the cloud data center has 'N' PMs,
$$\begin{aligned} PM=\{ pm_{1},pm_{2},pm_{3}, . . . ,pm_{N}\} \end{aligned}$$
and every PM can have multiple VMs,
$$\begin{aligned} VM=\{ vm_{1},vm_{2},vm_{3}, . . . ,vm_{j}\} \end{aligned}$$
Every VM has some resources such as CPU, RAM, network bandwidth and storage. At the fog computing layer, the system has 'n' fog nodes to manage consumer requests:
$$\begin{aligned} Fog=\{ f_{1},f_{2},f_{3}, . . . ,f_{n} \} \end{aligned}$$
The total number of electricity requests received by a fog node from the SGs is
$$\begin{aligned} Req_{Total}=\sum _{i=1}^{n}(R_{i}) \end{aligned}$$
The load balancer balances the VM load by distributing requests equally among all the VMs. When fog servers receive user requests, they maintain a queue, put each request into the queue, and assign it to a VM as soon as a free one is found. The processing time can be calculated as
$$\begin{aligned} Processing_{Time}= \frac{Number \ of \ Requests}{Capacity \ of \ VM} \end{aligned}$$
The total processing time can be estimated as
$$\begin{aligned} Total_{PT}=\sum _{X=1}^{n} \sum _{Y=1}^{m}(Processing_{Time} \times A) \end{aligned}$$
where A is the arrival time of a specific request, X indexes the user request assignment and Y indexes the VM assignment.

Response time: the RT is the time taken by the fog to receive and respond to a request,
$$\begin{aligned} RT=Delay_{Time}+Finishing_{Time}-Arrival_{Time} \end{aligned}$$
The objective of the proposed system is to minimize the total processing time by minimizing the overall response time of the fog servers.
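The two timing formulas above can be checked with a short worked example. The numeric values below are illustrative only; they are not taken from the paper's simulation setup.

```python
# Processing_Time = Number of Requests / Capacity of VM
requests = 100        # requests waiting at one VM (illustrative)
vm_capacity = 250     # requests the VM can process per unit time (illustrative)
processing_time = requests / vm_capacity

# RT = Delay_Time + Finishing_Time - Arrival_Time
delay, finish, arrival = 0.5, 2.0, 1.0   # illustrative time values
response_time = delay + finish - arrival

print(processing_time, response_time)  # 0.4 1.5
```

Minimizing RT (the broker routes each request to a lightly loaded fog) also lowers the summed Total_PT, which is the stated objective.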

4 Proposed System

The proposed system is designed to fulfill the electricity demand of consumers using cloud and fog computing efficiently in terms of cost and processing time. This section describes the architecture of our proposed system: a three-layered architecture consisting of cloud, fog and consumer layers. These three layers are interconnected to share information with each other. Fog computing acts as an intermediary layer between cloud servers and consumers. The proposed system deals with two types of electricity demand: the energy demand of electric vehicles is considered commercial consumption (power supply stations), and household demand is considered residential consumption (buildings, homes and apartments). Every building and power supply station has its own SG to generate energy and fulfill the demand of its connected users. SGs communicate with the fog server to share information about their energy generation and demand. For an energy-deficient SG, the fog server predicts the energy demand and provides energy from an energy-surplus SG (one whose generation exceeds its demand). If all the SGs become energy deficient, the fog server sends their information to the cloud server for utility assignment. The utility company is the large-scale electricity provider.

For the simulation, the proposed system considers six regions, each managed by two fog servers that schedule SG requests. There are one hundred SGs in every region, and every SG serves a maximum of one thousand homes. Every home requests electricity from its SG; the request is sent to the fog servers, and the fog assigns electricity to that specific home from the SG while maintaining the information on generation and consumption. Every region has an electric vehicle Power Supply Station (PSS) that can fulfill a maximum of one thousand vehicle charging or discharging requests in one hour. In this proposed system, the cloud layer has full information about the demand and generation of all the smart grids of the six regions. The electricity supplier (utility) is connected to the cloud and the SGs. The cloud server has full information about the electricity generation of the utility and is connected with the fog servers of every region. Every fog server knows the energy consumption in its corresponding region and takes runtime decisions to fulfill the energy demand of consumers. Fog servers communicate with the cloud server to assign a utility company that can fulfill the request of an electricity-deficient SG. The proposed energy optimization system is designed to minimize the cost and latency of fog servers. Latency is minimized by reducing the overall response time between the fog and the SGs, and fog server processing cost is minimized by reducing the number of virtual machines and the number of fog servers used in the system. Processing speed and cost are inversely proportional; in this research, we designed an optimized system with a minimum trade-off between cost and response time. The proposed system considers two scenarios of fog communication. In the first scenario, the proposed scheme uses two fogs in every region to communicate with the cloud, so the overall system uses twelve fogs in six regions. Every fog has multiple VMs.
Each VM has hardware resources such as processors and temporary storage, and software resources such as an operating system. To minimize fog processing time, VMs need to be assigned more efficiently. In the literature, the authors of [3, 4, 5] discuss load balancing algorithms such as RR, Throttled and Particle Swarm Optimization (PSO) with multiple service broker policies. The proposed system compares the RR, Throttled and AMVM load balancing algorithms with a new dynamic service broker policy, using one and two fogs in each region. The load balancing algorithms are described below.
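A dynamic service broker policy of the kind described above can be sketched as a routing rule: among the fog servers of a region, send each SG request to the least loaded one. This is a minimal illustration of the idea, not the paper's actual broker policy; the `route` function and the load metric are assumptions.

```python
def route(fog_loads: dict) -> str:
    """Pick the fog server with the fewest outstanding requests.
    fog_loads maps a fog-server name to its current queued-request count."""
    return min(fog_loads, key=fog_loads.get)

# Region with two fog servers (the twelve-fog scenario): the broker
# routes the next SG request to the less busy server.
print(route({"fog-1": 12, "fog-2": 7}))  # fog-2
```

With one fog per region (the six-fog scenario) the broker degenerates to a single choice, which is why cost drops but response time rises in the results that follow.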
Fig. 3.

Proposed system model

4.1 RR Load Balancing Algorithm

The RR load balancing algorithm maintains a table of requests ordered by arrival time. Its scheduler takes the incoming user requests and assigns them to the VMs one by one in a circular order (Fig. 3).
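A minimal sketch of RR assignment, assuming requests are already ordered by arrival time (function and variable names are illustrative):

```python
from itertools import cycle

def round_robin(requests, vms):
    """Assign each request, in arrival order, to VMs in a fixed rotation."""
    rotation = cycle(vms)
    return {req: next(rotation) for req in requests}

print(round_robin(["r1", "r2", "r3"], ["vm0", "vm1"]))
# {'r1': 'vm0', 'r2': 'vm1', 'r3': 'vm0'}
```

Note that RR ignores the current state of each VM, which is why the state-aware AMVM and Throttled algorithms below can outperform it.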

4.2 AMVM Load Balancing Algorithm

The AMVM load balancer works much like Throttled. User requests are queued by the data center controller as they are received. First, the availability of a VM is checked; if a VM is available, the request is removed from the front of the queue and allocated to that VM, whose status is changed from available to busy. After the execution of the request, the status of the VM is changed back from busy to available.

4.3 Throttled Load Balancing Algorithm

A user request is received by the data center controller, and then VM allocation is performed. The load balancer finds an available VM in the index table that it maintains and allocates the VM accordingly.
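The index-table lookup shared by Throttled and AMVM can be sketched as follows. This is an illustrative simplification under assumed names (`throttled_allocate`, the `AVAILABLE`/`BUSY` states); the simulator's implementation differs in detail, and AMVM additionally monitors active request counts per VM.

```python
def throttled_allocate(index_table: dict):
    """Return the first AVAILABLE VM from the index table and mark it BUSY,
    or None when every VM is busy (the request stays queued)."""
    for vm, state in index_table.items():
        if state == "AVAILABLE":
            index_table[vm] = "BUSY"   # allocated: update the index table
            return vm
    return None                        # no free VM: re-queue the request

table = {"vm0": "BUSY", "vm1": "AVAILABLE"}
print(throttled_allocate(table))  # vm1
```

After the request finishes, the data center controller would set the entry back to `AVAILABLE`, matching the busy-to-available transition described for AMVM above.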
Fig. 4.

Flow diagram

5 Simulations and Results

The simulations of the proposed system were performed in the Cloud Analyst tool. The proposed system schedules SG requests at the fog using the RR, Throttled and AMVM load balancing algorithms. Results were simulated on the basis of electricity requests generated by the consumers over 24 h. Simulations were performed for six regions with one fog server and with two fog servers in each region, as shown in Fig. 4.

5.1 Twelve Fog Servers

In this scenario, two fog servers are considered for each region in the experimental setup. For residential consumption, every region has two hundred buildings, and every building has its own SG to fulfill the electricity demand of a maximum of one thousand apartments at a time. Every region has one SG for its PSS, and every PSS has a maximum capacity of 100 electric vehicles charging or discharging at a time. Twelve fog servers are considered in this scenario. The proposed system optimizes SG request scheduling using fog computing with respect to time and cost. Figure 5 shows the average response time of the six regions using twelve fog servers, where every region has one PSS and two clusters of SGs. The AMVM and Throttled load balancing algorithms outperform RR using two fog servers in each region. Figure 6 shows that the average response and processing times of AMVM outperform those of RR using twelve fog servers.
Fig. 5.

Average response time by region (12-fogs)

Fig. 6.

Overall response and processing time (12-fogs)

Fig. 7.

System total cost

5.2 Six Fog Servers

In this scenario, we try to minimize fog server cost by reducing the number of fog servers: only one fog server is used in each region for electricity request scheduling. The number of requests and the other system evaluation parameters are the same as in the previous scenario. The proposed system optimizes fog and cloud computing with respect to time and cost. The overall system cost in both scenarios is shown in Fig. 7. A comparison of the response and processing times using twelve and six fog servers is shown in Fig. 8; it shows that the response time and VM placement cost of fog servers are inversely proportional, so there is a tradeoff between fog placement cost and processing time.
Fig. 8.

Response and processing time comparison

6 Conclusion

In this paper, the proposed system presents a model of SG management using cloud and fog infrastructure. The results of three load balancing algorithms are compared. They show that, for efficient processing in fog computing, the AMVM and Throttled algorithms perform better with limited fog resources. Throttled and AMVM support system scalability with minimum response and processing times, and both perform better than RR in both scenarios when the number of fog servers is varied. Various load balancing algorithms and broker policies were compared to reduce the response time and delay of the fog servers. The response time and VM placement cost of fog servers are inversely proportional, so there is a tradeoff between fog placement cost and processing time. Future work includes intelligent load balancing algorithms that warn the fog ahead of time about predicted user requests for resources from different regions of the world.


References

  1. Sotiriadis, S., Bessis, N., Buyya, R.: Self managed virtual machine scheduling in Cloud systems. Inf. Sci. 433, 381–400 (2017)
  2. Osanaiye, O., Chen, S., Yan, Z., Lu, R., Choo, K.K.R., Dlodlo, M.: From cloud to fog computing: a review and a conceptual live VM migration framework. IEEE Access 5, 8284–8300 (2017)
  3. Fatima, I., Javaid, N., Iqbal, M.N., Shafi, I., Anjum, A., Memon, U.: Integration of cloud and fog based environment for effective resource distribution in smart buildings. In: 14th IEEE International Wireless Communications and Mobile Computing Conference (IWCMC-2018) (2018)
  4. Javaid, S., Javaid, N., Tayyaba, S., Sattar, N.A., Ruqia, B., Zahid, M.: Resource allocation using Fog-2-Cloud based environment for smart buildings. In: 14th IEEE International Wireless Communications and Mobile Computing Conference (IWCMC-2018) (2018)
  5. Zahoor, S., Javaid, N., Khan, A., Muhammad, F.J., Zahid, M., Guizani, M.: A cloud-fog-based smart grid model for efficient resource utilization. In: 14th IEEE International Wireless Communications and Mobile Computing Conference (IWCMC-2018) (2018)
  6. Khalid, A., Javaid, N., Mateen, A., Ilahi, M.: Smart homes coalition based on game theory. In: 32nd IEEE International Conference on Advanced Information Networking and Applications (AINA-2018) (2018)
  7. Collotta, M., Pau, G.: An innovative approach for forecasting of energy requirements to improve a smart home management system based on BLE. IEEE Trans. Green Commun. Netw. 1(1), 112–120 (2017)
  8. Chen, S.L., Chen, Y.Y., Kuo, S.H.: CLB: a novel load balancing architecture and algorithm for cloud services. Comput. Electr. Eng. 58, 154–160 (2017)
  9. Gupta, H., Sahu, K.: Honey bee behavior based load balancing of tasks in cloud computing. Int. J. Sci. Res. 3(6) (2014)
  10. Pham, N.M.N., Le, V.S.: Applying Ant Colony System algorithm in multi-objective resource allocation for virtual services. J. Inf. Telecommun. 1(4), 319–333 (2017)
  11. Razzaghzadeh, S., Navin, A.H., Rahmani, A.M., Hosseinzadeh, M.: Probabilistic modeling to achieve load balancing in Expert Clouds. Ad Hoc Netw. 59, 12–23 (2017)
  12. Cao, Z., Lin, J., Wan, C., Song, Y., Zhang, Y., Wang, X.: Optimal cloud computing resource allocation for demand side management in smart grid. IEEE Trans. Smart Grid 8(4), 1943–1955 (2017)
  13. Singh, S.P., Sharma, A., Kumar, R.: Analysis of load balancing algorithms using cloud analyst. Int. J. Grid Distrib. Comput. 9(9), 11–24 (2016)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. COMSATS University Islamabad, Islamabad, Pakistan
