Preference based multi-objective reinforcement learning for multi-microgrid system optimization problem in smart grid

Grid-connected microgrids, comprising renewable energy, energy storage systems and local loads, play a vital role in decreasing fossil fuel consumption and greenhouse gas emissions. A distribution power network connecting several microgrids can promote more robust and reliable operation and enhance the security and privacy of the power system. However, the operational control of a multi-microgrid system is a significant challenge. To design a multi-microgrid power system, an intelligent multi-microgrid energy management method is proposed based on preference-based multi-objective reinforcement learning (PMORL) techniques. The power system model can be divided into three layers: the consumer layer, the independent system operator layer, and the power grid layer, each of which intends to maximize its own benefit. The PMORL is proposed to lead to a Pareto optimal set covering all objectives. A non-dominated solution is selected to execute a balanced plan that does not favour any particular participant. The preference-based results show that the proposed method can effectively learn different preferences. The simulation outcomes confirm the performance of the PMORL and verify the viability of the proposed method.


List of symbols
h(λ(t))  Positive/negative load change in percentage at various tariffs
l_n^b(t)  Value of the baseload
m  Objective index
n  Microgrid index
p^g(t)  Total power flow between microgrids and main grid
p_n^d(t)  Load demand
p_n^g(t)  Power flow between main grid and microgrid
p_n^r(t)  Renewable energy generation
Q(·)  Q value
r_m(·)  Reward function for each objective
r_{m,pre}(·)  Preference reward function
r_{m,nor}(·)  Normal reward function
s  State
s_n(t)  State of charge of the battery
SoC  State of charge
SQ(·)  Scalar Q value
t  Time index
ToD  Time of day

Introduction
The average annual domestic standard electricity bill for households served by non-home suppliers increased to £707 in 2020, based on an annual consumption of 3600 kWh [1]. Non-hydro renewable energy sources (RESs), such as solar, wind, tidal and geothermal energy, continue to enter the electricity market substantially; in the EU, the share of RESs increased from 10.3% in 2008 to 18% in 2018 [2]. It is well known that wholesale price fluctuation is an essential feature of deregulated electricity markets. Energy buyers who are sensitive to electricity prices may change their consumption habits according to dynamic price signals [3]. This means that dynamic energy tariffs can decrease energy demand during peak load periods and fill valley loads. Meanwhile, the use of RESs and energy storage can significantly reduce the use of fossil fuels, thereby reducing power generation costs and greenhouse gas emissions.
Extensive research on demand-side management has been carried out on dynamic tariff design. A demand response method based on dynamic energy pricing is proposed in [4], which realizes optimal load control of equipment by building a virtual power trading process. A smart grid decision-making model considering demand response and market energy pricing is proposed in [5] to capture the interaction between the market retail price and energy consumers. In [6], a cooperative operation procedure for an electricity and gas integrated energy system in a multi-energy system is proposed to improve system performance and optimize the power flow. However, the above demand-side management (DSM) research only optimizes energy prices from the perspective of operation and does not capture the impact of electricity market price changes and consumer demand fluctuations. In addition, most current papers only study single-objective optimization problems, such as shaping the demand profile [7], maximizing customer utility [8] and reducing total cost [9]. When planning a multi-microgrid system, there are coupling interactions among the power grid, the independent system operator (ISO) and the microgrids, and these participants usually have conflicts in the planning process. The impact of dynamic tariffs on a multi-microgrid system formulated as a multi-objective problem has not been fully investigated.
For a comprehensive design coordinating all participants, we consider a multi-microgrid system including three microgrids, one independent system operator (ISO), and one main power grid [10]. In general, the microgrids are disconnected from each other, with no exchange of renewable energy power. In this multi-microgrid system, a dynamic tariff scheme is implemented to evaluate the system performance of all participants. A multi-objective optimization method is necessary to balance the requirements of all participants without biasing towards any single one. In [11], a multi-objective genetic algorithm (MOGA), adapted to the physical features of the load dispatch problem, is utilized to optimize the time distribution of domestic loads within a 36-h period in a smart grid scenario. An energy optimization method based on multi-objective wind-driven optimization and a multi-objective genetic algorithm is employed to optimize operation cost and pollution emission with and without participation in hybrid demand response programs and an inclining block tariff [12]. However, genetic algorithms need many iterations to obtain good convergence results, whereas reinforcement learning can train policies in advance and then obtain the optimal solution faster from the trained policies [13].
Multi-objective reinforcement learning (MORL) is well suited to multi-objective problems with complicated strategic interactions. Reinforcement learning algorithms learn policies while interacting with the environment, whereas evolutionary algorithms do not. In many cases, reinforcement learning algorithms can exploit the interactive details of individual behaviours and thus be more effective than evolutionary algorithms. Although evolutionary and reinforcement learning algorithms share many features and can naturally work together (both can autonomously learn from experience and adaptively reuse data drawn from related problems as prior knowledge in new tasks), evolutionary algorithms ignore most of the advantageous structure of reinforcement learning problems; exploiting this structure should enable algorithms to search more effectively [14]. In [15], the reinforcement learning environment is formalized as a Markov Decision Process (MDP). The Q-learning algorithm iteratively approximates the optimal Q value [16]. In a multi-objective optimization problem, the objectives span two or more dimensions, and the conventional MDP is generalized to a multi-objective MDP. The most straightforward approach is to transform the multi-objective problem into a standard single-objective problem using a scalarization function [17]. Most MORL methods rely on a single-policy strategy to learn the Pareto optimal solution [15,18,19].
However, this transformation may not be suitable for solving nonlinear problems whose Pareto front lies in a non-convex domain. In addition, when multi-objective problems are investigated, MORL methods based only on the Pareto-optimality criterion may not accomplish a meaningful search. Incorporating preferences into the MORL optimization enhances the specificity of the selection and facilitates better decisions that consider all participants. Accordingly, the solutions will focus on preferred areas, and it is unnecessary to generate the entire Pareto optimal set with equal accuracy. This article develops a preference-based MORL algorithm (PMORL) to achieve high-quality solutions for nonlinear multi-objective functions. The proposed PMORL adopts the L_p metric to design a balanced multi-microgrid system plan in terms of the approximate Pareto front (APF). To the best of our knowledge, this is the first time PMORL has been employed in such a multi-objective optimization scenario. The system planner uses the Pareto front to examine the relationships and relative importance among the objective functions, which provides the planner with an option that is fair to all participants. The three main contributions of this article are as follows:

(1) This paper combines real-time dynamic energy tariffs with actual planning scenarios, considering the impact of real-time fluctuations in energy tariffs and renewable energy on the design of a multi-microgrid system. Three conflicting objectives are proposed for the multi-microgrid system: maximizing the sales revenue of the main grid supplier, maximizing the life of the energy storage, and minimizing the energy consumption costs of consumers.

(2) We develop a MORL algorithm based on the L_p metric to solve this multi-objective problem, which considers dynamic energy tariffs and energy storage operations (charge/discharge/idle). It can provide the entire Pareto front if enough exploration is given. The performance of the proposed algorithm is verified by comparison with a multi-objective genetic algorithm (MOGA) and a preference-based MOGA (PMOGA).

(3) An extended MORL algorithm using a preference model based on a Gaussian distribution is proposed to design a self-governing, rational decision-making agent and control the multi-microgrid system. The preferences of individuals facing the same choice are essential for simulating human decision-making behaviour, and the human emotional system is capable of adjusting the perception and evaluation of situations.
The rest of this article is arranged as follows. Section 2 outlines the multi-microgrid system and explains the mathematical models of the three participants. The multi-objective problem is formulated in Sect. 3. Section 4 describes the proposed preference-based MORL method in detail. In Sect. 5, the approximate Pareto front and the dynamic tariff obtained from the experimental results are presented. Finally, conclusions are drawn in Sect. 6.

Multimicrogrid description
This paper is concerned with the design of a high-level three-microgrid optimization system. An information and communication technology (ICT) system transfers information among the three microgrids, including the load demand, energy tariff and renewable energy generation. The mathematical models of the multi-microgrid system, including the microgrids, the ISO and the main power grid, are described in the following subsections. Let N = {1, 2, . . . , N} be the set of microgrids and N_s = {1, 2, . . . , N_s} be the set of microgrids with an energy storage system, where N_s ≤ N.

Microgrid model
The microgrid system model describes the power balance among the energy storage (if available), local energy generation, other microgrids, and the main power grid. For microgrid n without an energy storage system, the power balance can be written as

p_n^g(t) + p_n^r(t) = p_n^d(t)  (1)

If p_n^g(t) is positive, power flows from the grid to microgrid n; otherwise, power flows from microgrid n to the grid, i.e., the extra electricity is sold to the main power grid.
For microgrid n with an energy storage system, the power balance equation additionally includes the charging/discharging power of the storage, where (3a) represents the constraint on the maximum charging/discharging rated power and (3b) the constraint on the maximum capacity of the storage. Note that we do not consider the self-discharge effect of the energy storage system, because the energy loss over a short-term period is small enough to be negligible [20]. Considering the shiftable loads, the load demand term p_n^d(t) can be written as

p_n^d(t) = (1 + h(λ(t))) l_n^b(t)  (4)

where l_n^b(t) is the load demand in (2) without the shiftable loads, h(λ(t)) = a_1 λ(t)^2 + a_2 λ(t) + a_3 is the positive/negative load change at tariff λ(t), and n = 1, 2, . . . , N is the microgrid index. Baseload forecasting technology can achieve high-precision results because the baseload shows almost no fluctuation in practice; therefore, we assume that l_n^b(t) is known in advance.

Different domestic consumers may respond differently to the same tariff, and different tariff plans can be established by choosing an objective function from microeconomics [21]. For each consumer, the objective function represents the consumer's comfort corresponding to the total power consumption. Recent investigations show that certain objective functions can accurately trace the behaviour of energy consumers [22]. The overall objective function of the multi-microgrid can be written as [23,24]

max_{λ(t)} : F_w = Σ_t Σ_{n=1}^{N} [ f_w(p_n^d(t), ω_n) − f_c(λ(t), p_n^d(t)) ]  (5)

where f_w(p_n^d(t), ω_n) corresponds to the marginal benefit, which is concave [22,25], and describes the power consumption p_n^d(t) response of a consumer to different electricity prices λ(t). f_c(λ(t), p_n^d(t)) is the cost imposed by the electricity provider; for example, a consumer who uses p_n^d(t) kW of electricity during the period between t and t + 1 at a rate of λ(t) is charged λ(t) · p_n^d(t). ω_n is a parameter that can vary between consumers and across different time intervals of the day. α, β and γ are pre-determined coefficients to be calibrated [26]. Every consumer adjusts its energy usage to maximize its welfare for each posted tariff λ(t) at time t. This can be achieved by setting the derivative of F_w to zero, which means that the consumer's marginal benefit equals the advertised tariff.
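As a minimal sketch of the shiftable-load model above, the quadratic response h(λ(t)) and the resulting demand can be computed as follows; the coefficient values a_1, a_2, a_3 are illustrative assumptions, not calibrated values from this paper:

```python
def demand_response(tariff, baseload, a1=-0.002, a2=-0.03, a3=0.12):
    """p_d = (1 + h(lambda)) * baseload, where
    h(lambda) = a1*lambda^2 + a2*lambda + a3 is the positive/negative
    load change at tariff `lambda`. Coefficients are illustrative."""
    h = a1 * tariff ** 2 + a2 * tariff + a3
    return (1.0 + h) * baseload
```

With these assumed coefficients, a low tariff lifts demand above the baseload and a high tariff pushes it below, which is the intended demand response behaviour.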

ISO model
The ISO described in this subsection mainly acts as an emergency power provider to support emergency demand response plans. In general, the ISO will store as much energy as possible to reach a safe level. In order to provide maximum emergency power and extend battery life, the objective function can be expressed as:

Power grid model
The main power grid releases energy into the microgrids when renewable energy generation is insufficient; when there is a surplus of renewable energy in a microgrid, it can also absorb electricity from that microgrid. The total power flow between the microgrids and the main grid is p^g(t) = Σ_{n=1}^{N} p_n^g(t), and the objective of the main power grid is to maximize its profit from this power distribution:

max_{λ(t), p_n^g(t)} : λ(t) p^g(t) − ( a_g p^g(t)^2 + b_g p^g(t) + c_g )

where a_g > 0 and b_g, c_g ≥ 0 are the coefficients of the quadratic generation cost function.
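Assuming, as is standard, a quadratic generation cost a_g p² + b_g p + c_g, the main grid's profit for a given tariff and power flow can be sketched as follows (coefficient values are illustrative, not from the paper):

```python
def grid_profit(tariff, p_g, a_g=0.01, b_g=0.5, c_g=2.0):
    """Main-grid profit: revenue tariff * p_g minus the quadratic
    generation cost a_g*p_g^2 + b_g*p_g + c_g, with a_g > 0 and
    b_g, c_g >= 0 as in the text. Coefficients are illustrative."""
    assert a_g > 0 and b_g >= 0 and c_g >= 0
    return tariff * p_g - (a_g * p_g ** 2 + b_g * p_g + c_g)
```

Because a_g > 0, the profit is concave in p_g, so an interior profit-maximizing power flow exists for any fixed tariff.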
Load demand and renewable generation data are based on those of the Penryn Campus, University of Exeter. The university office of general affairs acts as the ISO to buy energy from the utility company and connect the energy storage system and renewable energy sources (RESs) to create a time-varying electricity tariff. The current electricity tariff of the campus is fixed. If the energy tariff varied at different times, students might adjust their electricity consumption habits for household appliances to reduce energy bills. Students living in student apartments would decide when to use various electrical appliances, such as washing machines and dryers, based on dynamic electricity tariffs. In addition, the university office can manage the time-varying electricity tariff to decrease the peak load demand, optimize the energy storage system operation, and reduce the energy purchased from utility companies. The design of this scenario has very practical significance for the operation of a smart microgrid, especially when smart meters are installed in every household. In 2019, nearly 1 million smart meters were installed in British households [27]. As of 30 June 2020, smart and advanced meters in homes and small businesses had increased to 21 million, of which 17.4 million were operating in smart mode [28].

Multi-objective problem formulation
In this section, a multi-objective problem (MOP) formula will be proposed to design and maximize the benefits of three objectives for a multi-microgrid system. The following content will discuss the definition of the Pareto Optimality.
In order to optimize the three objectives F_w, F_s and F_g simultaneously, the MOP is written as

max_{λ(t), p_n^g(t)} : [F_w, F_s, F_g]  (12)

subject to (1)−(4), (8) and (9), where λ(t) and p_n^g(t) are the two variables controlled by the ISO; they are restricted by the current renewable energy generation and the charging/discharging status of the energy storage between time t − 1 and t. A supplementary function F_a, governed by the stored energy in the energy storage system, is introduced to handle all the constraints: F_a = 0 if and only if all constraints are satisfied; otherwise, F_a is set to a large positive penalty coefficient (14). In terms of the formulation (14), the MOP in (12) can be revised as

max_{λ(t), p_n^g(t)} : [F_w − F_a, F_s − F_a, F_g − F_a]  (15)

To solve the MOP, Pareto optimality is employed to assess performance. The general definitions are as follows.
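The role of the supplementary function F_a can be sketched as a simple feasibility check; the storage bounds and the penalty magnitude below are assumptions for illustration:

```python
def supplementary_f_a(soc, soc_min, soc_max, penalty=1e6):
    """F_a = 0 if and only if the storage constraint
    soc_min <= soc <= soc_max is satisfied; otherwise F_a equals a
    large positive penalty coefficient (1e6 is an illustrative choice)."""
    return 0.0 if soc_min <= soc <= soc_max else penalty

# Each objective is then penalized as F_m - F_a, so an infeasible
# operating point is never preferred by the maximizing agent.
```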
Definition - Pareto Dominance Let H(x) be the MOP objective vector and Ω the feasible solution space. A solution u ∈ Ω is said to dominate another solution u′ ∈ Ω (written u ≺ u′) if H_i(u) ≤ H_i(u′) holds for all i and H_i(u) < H_i(u′) for at least one i. In other words, a solution dominates another if it is no worse on every objective function and strictly better on at least one.
Definition - Pareto Optimal A solution u* is Pareto optimal if there is no feasible solution u ∈ Ω that dominates it.
Definition - Pareto Optimal Set P* = {u* ∈ Ω | u* is Pareto optimal} is the Pareto optimal set of the MOP, i.e., the set of all Pareto optimal solutions.
Definition - Pareto Front The Pareto front is the boundary in objective space determined by the set of all objective vectors mapped from the Pareto optimal set.
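The dominance test and the extraction of the Pareto optimal set can be sketched directly from the definitions above (using the ≤/< convention of the dominance definition):

```python
def dominates(u, v):
    """u dominates v: H_i(u) <= H_i(v) for all i, with strict
    inequality for at least one i (Pareto Dominance definition)."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_optimal_set(points):
    """Keep every objective vector not dominated by any other."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```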

Multi-objective reinforcement learning
To obtain the Pareto front for the MOP, a multi-objective Q-learning framework is introduced in this subsection. This MORL structure is based on a single-policy strategy that applies scalarization functions to decrease the dimensionality of the MOP. In other words, the problem is solved by converting the multi-objective problem into a single-objective problem.
A scalarization function converts the Q-value vector into a single value,

SQ(s, a) = f(x, w)  (16)

where x and w are the Q-value vector and the weight vector in the Q-learning environment, respectively. The scalar Q value of a single-objective problem is replaced by a Q vector containing a Q value for each objective:

Q(s, a) = [Q_1(s, a), Q_2(s, a), . . . , Q_M(s, a)]  (17)

A single scalar Q value SQ(s, a) is obtained by linear scalarization:

SQ(s, a) = Σ_{m=1}^{M} w_m Q_m(s, a)  (18)

where the weight values satisfy Σ_{m=1}^{M} w_m = 1. However, the estimated SQ(s, a) value has a major weakness: with linear scalarization, the Pareto front can only be found in convex regions [29,30]. For multi-objective optimization problems, the weighting coefficients of the three objective functions can be set equal, and a normalization method can be used to avoid favouring a particular participant. However, when the problem has a concave Pareto front (PF), neither method may be effective. Even if the PF is convex, approximating it with utility functions derived from various weights introduces other challenges [30−32]. Therefore, this paper develops the scalar function by adopting the L_p metric [33]. The L_p metric measures the distance between the utopian point z* and a selected point x in the multi-objective space; z* is adjusted during the iteration process. The L_p metric between x and z* is

||x − z*||_p = ( Σ_{m=1}^{M} w_m |x_m − z*_m|^p )^{1/p},  1 ≤ p ≤ ∞  (19)

If p = ∞, the metric becomes the weighted L_∞ or Chebyshev metric:

||x − z*||_∞ = max_m w_m |x_m − z*_m|  (20)

Substituting Q_m(s, a) for x_m gives the scalar value used to update SQ(s, a) for the multi-objective problem:

SQ(s, a) = max_m w_m |Q_m(s, a) − z*_m|  (21)

The elements of RL, including the state space, the action space and the reward functions, together with the learning rate, exploration rate and discount factor, are explained below.
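The weighted Chebyshev scalarization of a Q vector, as used to form SQ(s, a), can be sketched as:

```python
def chebyshev_sq(q_values, weights, z_star):
    """Weighted L_inf (Chebyshev) metric between the Q vector and the
    utopian point z*: SQ = max_m w_m * |Q_m - z*_m|. The scalar greedy
    strategy then picks the action that minimizes this distance."""
    return max(w * abs(q - z)
               for q, w, z in zip(q_values, weights, z_star))
```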

State Space
The state space consists of the time of day (ToD_j) and the state of charge (SoC_k).

Action space
The action space is a combination of the tariff and the charging/discharging/idle status of the energy storage, where the Tariff is discretized into 8 values from 1.5 to 5.0 and the StorageCommand takes three values: charge, discharge and idle.
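The discrete action space can be enumerated as the Cartesian product of the tariff levels and storage commands; the text gives 8 tariff values from 1.5 to 5.0, which we assume here to be evenly spaced (step 0.5):

```python
import itertools

# 8 tariff levels (assumed evenly spaced from 1.5 to 5.0) x 3 storage
# commands gives a 24-element discrete action space.
TARIFFS = [round(1.5 + 0.5 * i, 1) for i in range(8)]
COMMANDS = ["charge", "discharge", "idle"]
ACTIONS = list(itertools.product(TARIFFS, COMMANDS))
```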

Reward
The reward value r_m(t) for each objective is the stimulus obtained by taking an action at state s. The reward function is designed to maximize the corresponding objective function, and all obtained reward values are used to update the expanded Q table accordingly. The normalized reward r_{m,nor}(s, a_m) corresponds to the value of each objective function F_m (i.e., F_w, F_s, F_g), scaled by a constant C_m for each objective to avoid favouring a particular participant. Based on the SQ(s, a) table, the action selection policy is updated and the appropriate action is chosen to receive the maximum reward, e.g., via the scalar greedy strategy. The detailed scalar greedy strategy used in this paper is given in Algorithm 1.

Preference-based multi-objective reinforcement learning
Standard RL considers a scenario where an agent moves through a state space by executing different actions, and reward signals provide the agent with feedback about its behaviour. The aim of RL is to maximize the expected total reward. However, the computational cost of comprehensive interactions between the different objectives and a decision-maker is expensive. Therefore, it is necessary to extend the basic reinforcement learning framework with a preference learning model. The basic idea of the proposed preference model is to weight the reward functions according to a human's emotional system. In this paper, the proposed PMORL employs a preference reward function to enable the agent to learn and perceive different preferences. The preference reward function learns various policies based on a Gaussian distribution. We use a multi-objective Q-learning algorithm with a scalar greedy strategy to discover the optimal policy. The employed preference reward function introduces a bias towards one particular objective, making it most common to choose good actions for this specific objective while reducing the probability of selecting good actions for the other objectives.
The reward function of each objective for updating the scalar SQ(s, a) table can then be revised as

r_m(s, a_m) = r_{m,nor}(s, a_m) + r_{m,pre}(s, a_m), ∀ m ∈ M  (25)

where r_{m,pre}(s, a_m) ∼ N(μ_m(s, a_m), σ_m^2(s, a_m)) is drawn from a normal distribution with mean μ_m(s, a_m) and standard deviation σ_m(s, a_m). For example, consider an action preference model (reward) with μ_2(s, a_2) = 20, μ_1(s, a_1) = μ_3(s, a_3) = 1 and σ_1(s, a_1) = σ_2(s, a_2) = σ_3(s, a_3) = 1. Introducing this preference model for action a_2, based on the specific objective F_s, biases the policy towards action a_2: the probability of action a_2 being selected becomes high, while the probabilities of actions a_1 and a_3 being selected are reduced.
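Sampling the Gaussian preference reward of equation (25) can be sketched as follows, using the example means from the text (μ_2 = 20, μ_1 = μ_3 = 1, all σ = 1):

```python
import random

def preference_reward(mu, sigma=1.0):
    """Draw r_pre ~ N(mu, sigma^2); this bonus is added to the
    normalized reward r_nor of each objective (eq. 25)."""
    return random.gauss(mu, sigma)

# Example preference model: action a_2 (objective F_s) is strongly
# preferred, so its bonus is almost always the largest.
mus = {"a1": 1.0, "a2": 20.0, "a3": 1.0}
bonuses = {a: preference_reward(mu) for a, mu in mus.items()}
```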
The proposed PMORL strategy is explained in Algorithm 2. First, three Q_m(s, a) tables, one for each objective, and one SQ(s, a) table are initialized. The algorithm then starts each episode from state s and picks an action via the scalar greedy strategy. Once the action is taken, the agent transitions to a new state s′ and generates three reward values r_m(s, a), one per objective, according to equation (25); these reward values are calculated independently for each objective. The scalar SQ(s, a) is then updated for the chosen action via (21). Finally, the next state s′ is adopted and a new action a′ is taken, repeating the loop until the termination condition is met.

Algorithm 2: Multi-objective Q-learning algorithm
1: Initialise Q_m(s, a) for each objective m and the scalar SQ(s, a)
2: for each episode do
3:   Initialize state s
4:   repeat
5:     Select action a using the scalar greedy strategy
6:     Take action a and observe the new state s′ ∈ S
7:     Obtain the reward vector r and select the new action a′
8:     for each objective m do
9:       Q_m(s, a_m) ← Q_m(s, a_m) + α_t (r_m(s, a_m) + γ Q_m(s′, a′_m) − Q_m(s, a_m))
10:    end for
11:    Update SQ(s, a)
12:    s ← s′
13:  until s is terminal
14: end for
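Algorithm 2 can be sketched in Python under simplifying assumptions: tabular Q values, the Chebyshev scalarization for the scalar greedy step, and a hypothetical environment interface that returns a reward vector (the class and method names below are illustrative, not from the paper):

```python
import random
from collections import defaultdict

class MultiObjectiveQLearner:
    """Tabular multi-objective Q-learning (sketch of Algorithm 2).
    One Q table per objective; actions are ranked by the weighted
    Chebyshev distance of their Q vector to the utopian point z*
    (the "scalar greedy" strategy)."""

    def __init__(self, actions, n_objectives, alpha=0.1, gamma=0.95,
                 epsilon=0.1, z_star=None, weights=None):
        self.actions = list(actions)
        self.M = n_objectives
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # z* should upper-bound the attainable Q values per objective.
        self.z_star = z_star if z_star is not None else [0.0] * n_objectives
        self.w = weights if weights is not None else [1.0 / n_objectives] * n_objectives
        self.Q = [defaultdict(float) for _ in range(n_objectives)]  # Q[m][(s, a)]

    def sq(self, s, a):
        """Scalar SQ(s, a): weighted Chebyshev distance to z* (eq. 21)."""
        return max(self.w[m] * abs(self.Q[m][(s, a)] - self.z_star[m])
                   for m in range(self.M))

    def select_action(self, s):
        """Scalar epsilon-greedy: explore, else minimize SQ(s, a)."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return min(self.actions, key=lambda a: self.sq(s, a))

    def update(self, s, a, rewards, s_next, a_next):
        """Per-objective Q update (step 9 of Algorithm 2)."""
        for m in range(self.M):
            td = (rewards[m] + self.gamma * self.Q[m][(s_next, a_next)]
                  - self.Q[m][(s, a)])
            self.Q[m][(s, a)] += self.alpha * td
```

In a full run, the preference bonus of equation (25) would be added to each element of `rewards` before calling `update`, which is what biases the learned policy towards the preferred objective.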

Simulation results and performance
The simulation results are presented to evaluate the performance of the proposed PMORL algorithm. In the experimental environment, three microgrids (N = 3) are considered, two of which have energy storage (N_s = 2). The sizes of the two energy storages are 250 kWh and 200 kWh, respectively, and s_n is set to 10% of the capacity for each storage. The average power demand in response to tariff λ is obtained as explained in [34]. The baseload l_n^b is from the Penryn Campus, University of Exeter. The total baseload and renewable generation on 17 November 2019 are displayed in Fig. 1. The actual load demand can fluctuate according to the tariff when the price signal changes. The baseload of the three microgrids is given in Fig. 2. Figure 3 presents a case of the preference-based results of the approximate Pareto front (APF) that maximizes the objective function F_w. At the beginning of the iteration, all actions are selected randomly, and the optimal policy is also random; therefore, as seen in Fig. 3, the objective function corresponding to the randomly selected action fluctuates wildly in each iteration. When the number of iterations reaches 100, the simulation results begin to converge and slowly stabilize, and after 150 iterations the results converge to the optimal values. Inevitable fluctuations remain after 150 iterations, because the action selection strategy retains a very low probability of selecting some actions at random. Obviously, the objective functions conflict with each other, and it is impossible to find one result that is optimal for all of them. However, we can find an optimal solution that is biased towards a specific objective function, or a compromise solution that is fair to all three objectives. Figure 4 presents the results of the APF based on MORL; the experimental results show the conflict between the different objective functions.
When F_w is large, the other two objectives deviate from their optimal values, and vice versa. The three solutions p*_1, p*_2 and p*_3 are the extreme dominance solutions of the three objective functions, respectively; each of these solutions benefits only a single objective function. In order to ensure fairness among all objective functions, a specific APF-based solution P* is selected so as not to give any single objective an advantage. In Fig. 4, this relatively special solution P* in the Pareto optimal solution set is located at the centre of the set. The distance from P* to each of the three objective functions is the same, which indicates that the provided point P* is a relatively fair solution for the three objective functions.
The Pareto optimal set of the test outcomes in Fig. 4 reveals that MORL can either prefer a single objective function or balance all objective functions. As the learned SQ(s, a) table encodes the experience of the agent, the decision-making problem does not need to be re-solved, so multi-objective issues can be determined more quickly than with conventional optimization methods. In short, all empirical results confirm the performance of MORL. However, extending multi-objective reinforcement learning to incorporate psychological and neurophysiological findings is necessary. To simulate human decision-making behaviour, an optimal policy based on expert preferences is emulated. The policy favouring each of the three objective functions is presented over 300 independent runs in Fig. 5. The performance in Fig. 5 shows a clear preference compared to the results in Fig. 4, and the outcomes are straightforward and in line with our expectations. This shows that the extra rewards controlled by a human's emotional states can introduce preferences into the optimal policy of traditional multi-objective reinforcement learning, enabling smart grid designers to use the preference model to develop MORL agents with specific preferences. The extra reward functions can be used to simulate rational components of decision making while retaining the main reward process to maximize the expected objectives. Table 1 provides the average results for the three objectives over 500 independent runs. It is clear that the PMORL can achieve the preference-based optimal results for each objective function as described in (12). The PMORL allows agents to be developed with preferences and specific targets. The extra reward value (here drawn from a Gaussian distribution) can simulate rational decision-making components while keeping the primary reward process to maximize the expected benefit.
The PMORL can find the best solution area according to the preference for the specific function (in the tables, bold values indicate the optimal values of the preferred objective). In order to verify the accuracy of the algorithm, we also compare the results with those of the MOGA and PMOGA. Table 2 shows that the experimental results of PMOGA and PMORL are very close, and both are better than those of MOGA; the proposed PMORL achieves the best results. In addition, we also compare the running times of the three algorithms: compared with MOGA's optimization time of 350 s and PMOGA's of 352 s, the trained PMORL completes the iteration in a very short time while obtaining excellent results. Grid designers can design different multi-objective optimization models according to their preferences. Fluctuations in tariff signals play an important role in smart grid energy management. Figure 6a shows that the proposed method can generate appropriate dynamic tariffs, and the energy storage system status is demonstrated in Fig. 6b. Ideally, high electricity tariffs produce peak reduction and discharge the energy storage, while low electricity prices fill the trough load and charge the energy storage. Figure 6a illustrates the relationship between the electricity tariff and the energy storage system. At hour 3, the electricity tariff is relatively high; although the stored energy is relatively low, the action selected by the optimal policy does not charge the energy storage system but keeps it idle, because a high electricity tariff means buying more electricity from the grid. All three objectives seek to maximize their own benefits: the ISO does not worry about tariffs and is only concerned with emergency energy storage, the main grid only considers its own profit maximization, and consumers consider how to reduce electricity bills without affecting the use of household appliances.
This result is not biased towards the third objective, so the agent tries not to charge the energy storage system when the electricity tariff is high and, at the same time, tries not to discharge it in order to preserve the largest possible emergency energy reserve.

Conclusion
In this paper, a preference-based multi-microgrid planning model considering dynamic electricity tariffs and renewable energy generation is proposed. The design scenarios are analysed using a preference-based multi-objective reinforcement learning algorithm that optimizes the energy storage operations and electricity tariffs. In addition, the dynamic tariff of the microgrid system is restricted by the power demand of the main grid, which takes the interests of all three objectives into account. The experimental outcomes reveal that the MORL algorithm can produce a fair and effective operation plan for all participants by controlling the operation of the energy storage and modifying the real-time electricity tariff. Meanwhile, the proposed PMORL can introduce preferences into an optimal policy through additional reward functions and allows grid designers to develop agents with preferred objectives. This demonstrates the ability of PMORL to learn the optimal control strategy, and the proposed PMORL can also be applied to other multi-objective environments. The coordinated operation of the microgrid system helps to increase the utilization rate of renewable energy, extend the service life of the energy storage batteries, decrease the operating cost of the microgrids, save electricity bills for consumers, and maximize grid profits.