Estimation of regions of attraction of power systems by using sum of squares programming

This paper presents an algorithm for estimating the regions of attraction of power systems based on the Lyapunov function approach, where a sublevel set of a Lyapunov function for a target system is used as the estimate. In particular, we focus on a recently proposed algorithm based on sum of squares (SOS) programming, and aim to develop a simpler algorithm for practical use. To this end, we present an algorithm that overcomes, in a simpler way, the difficulty of the SOS programming problem addressed in the existing study, i.e., the bilinear constraints. In the proposed algorithm, two SOS programming problems are iteratively solved, and the number of problems solved at each iteration is reduced to half of that in the existing algorithm. In addition, we theoretically analyze the proposed algorithm and show its convergence under certain conditions. The performance of our algorithm is demonstrated by numerical examples.


Introduction
Stability analysis of power systems has been a major topic in the field of power engineering. This is because analyzing the stability of power systems leads to their safe and efficient operation. In fact, if the stability is not analyzed, the impact of accidents on the stability cannot be estimated, and systems may unexpectedly become unstable. Moreover, in such a case, the systems must be operated conservatively, which decreases efficiency.
A typical type of stability of power systems is transient stability [13], i.e., the ability of power systems to maintain the synchronization of generators under transient disturbances such as faults and the loss of a portion of the transmission network. An index to evaluate the transient stability is the size of the region of attraction (ROA), the set of all system states from which the system converges to an equilibrium state; this is illustrated in Fig. 1. If the state after a disturbance is in the ROA, then it again converges to the equilibrium state. That is, a larger ROA means higher transient stability. (Author affiliation: Mitsubishi Motors Corporation, 1-1 Mizushimakaigandori, Kurashiki, Okayama 712-8501, Japan.)
A straightforward method to investigate the ROA is time-domain simulation [11] with differential equations describing the system dynamics. This provides the exact ROA, but is impractical for large-scale systems due to high computational costs. A promising alternative is the Lyapunov function approach, i.e., to find a Lyapunov function for a target system and use a sublevel set of it as an estimate of the ROA. This method does not require computing the trajectory of the system for each initial state, and thus its computational cost is low. Motivated by this, many studies have been conducted. For example, much effort has been devoted to methods utilizing energy functions [4,5,9,15]. There are also studies on methods utilizing nonlinear control theory [10], extended Lyapunov functions [3], and numerical optimization [1].
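The time-domain approach mentioned above can be sketched directly: integrate the dynamics from a grid of initial states and test convergence to the equilibrium. The sketch below uses a time-reversed Van der Pol oscillator as a stand-in system with a bounded ROA; the function names, grid, and tolerances are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Time-reversed Van der Pol oscillator: the origin is asymptotically
    # stable and its ROA is the interior of the (reversed) limit cycle.
    return [-x[1], x[0] - (1.0 - x[0]**2) * x[1]]

def escaped(t, x):
    # Stop integrating once the state leaves a large ball (divergence).
    return x[0]**2 + x[1]**2 - 100.0
escaped.terminal = True

def in_roa(x0, t_end=30.0, tol=1e-2):
    sol = solve_ivp(f, (0.0, t_end), x0, events=escaped, rtol=1e-6, atol=1e-9)
    # status 0 means t_end was reached without triggering the escape event.
    return sol.status == 0 and np.linalg.norm(sol.y[:, -1]) < tol

grid = np.linspace(-2.0, 2.0, 11)
count = sum(in_roa([a, b]) for a in grid for b in grid)
print(count, "of", grid.size**2, "grid points lie in the ROA")
```

The cost is one numerical integration per grid point, which is exactly why this baseline scales poorly and motivates the Lyapunov-function approach.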
Among these studies, [1] has proposed an algorithm for calculating a Lyapunov function of a given power system by using sum of squares (SOS) programming [6,7]. This algorithm is promising for two reasons. First, it is applicable to power systems with transfer conductances. Transfer conductances correspond to losses in power systems and cannot be neglected in practice; however, existing studies have assumed that transfer conductances are zero [5,9,15] or small [3,4], whereas the algorithm in [1] requires no assumptions on transfer conductances. Second, the algorithm in [1] enables us to systematically obtain Lyapunov functions, which reduces the time and effort spent to find them. On the other hand, the algorithm in [1] is complicated: it is composed of two loops, and four different types of SOS programming problems have to be iteratively solved in a loop. Such complexity is undesirable for users because it increases the time and effort spent to understand and implement the algorithm. Moreover, the complexity makes the analysis of the algorithm difficult, which leads to a lack of theoretical guarantees; in fact, [1] has not provided a theoretical result on the behavior (e.g., the convergence) of the algorithm.
The purpose of this paper is to develop a simpler algorithm than that in [1] for estimating the ROA. To this end, we make two contributions. First, we present an improved algorithm in which two SOS programming problems are iteratively solved, so that the number of problems solved at each iteration is reduced to half of that in the algorithm in [1]. The key idea is a simpler way to overcome the difficulty of the SOS programming problem addressed in [1]: the problem includes bilinear constraints, and thus cannot be efficiently solved by numerical optimization techniques. To overcome this difficulty, [1] has introduced four subproblems, while we need only two. Second, by utilizing the simple structure of the proposed algorithm, we analyze it and clarify its behavior. As a result, we guarantee that solutions to the two SOS programming problems solved at each iteration exist, and show that the algorithm converges under certain conditions. This provides a theoretical guarantee for the proposed algorithm.
Notation: Let R, R_+, and R_0+ be the real number field, the set of positive real numbers, and the set of nonnegative real numbers, respectively. Both the zero scalar and the zero vector are expressed by 0. For a number c ∈ R and a function f : R^n → R, L_c(f(x)) denotes the sublevel set of f, i.e., L_c(f(x)) := {x ∈ R^n | f(x) ≤ c}. We denote by P the set of polynomials. Moreover, for x ∈ R^n, let P_0 := {p(x) ∈ P | p(0) = 0}, and let P_+ be the set of positive definite polynomials, i.e., P_+ := {p(x) ∈ P_0 | p(x) > 0 for every x ≠ 0}. Finally, we use deg(p) to represent the degree of the polynomial p.

Problem formulation
Consider the power system in Fig. 2, composed of n generators.
From the swing equation, the dynamics of generator i (i ∈ {1, 2, . . . , n}) is described by

M_i δ̈_i(t) = P_mi − D_i δ̇_i(t) − P_ei(δ(t)),  (1)

where δ_i(t) ∈ R is the phase angle of the generator voltage, δ(t) ∈ R^n denotes the phase angles of all the generator voltages, i.e., δ(t) := [δ_1(t) δ_2(t) · · · δ_n(t)]^T, M_i ∈ R_+ is the moment of inertia, P_mi ∈ R is the mechanical input, and D_i ∈ R_+ is the damping coefficient. The variable P_ei(δ(t)) ∈ R is the electrical output given by

P_ei(δ(t)) := Σ_{j=1}^n E_i E_j (B_ij sin(δ_i(t) − δ_j(t)) + C_ij cos(δ_i(t) − δ_j(t))),  (2)

where E_i ∈ R_+ is the generator voltage and B_ij, C_ij ∈ R are the susceptance and conductance between generators i and j, respectively. From (1) and (2), the state equation of the system is of the form

ẋ(t) = f(x(t)),  (3)

where the state x(t) ∈ R^{2n−1} collects the phase angles relative to a reference generator and the angular velocities, and the components of f, obtained from (1) and (2), contain the trigonometric terms

sin(δ_i(t) − δ_j(t)), cos(δ_i(t) − δ_j(t)).  (4)
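The swing dynamics above can be written compactly in code. The sketch below is a generic n-machine right-hand side; the two-machine parameter values are illustrative assumptions, not taken from the paper, and the mechanical inputs are chosen so that the origin is an equilibrium, matching the standing assumption of this section.

```python
import numpy as np

def swing_rhs(state, M, D, Pm, E, B, C):
    """state = [delta_1, ..., delta_n, omega_1, ..., omega_n]."""
    n = len(M)
    delta, omega = state[:n], state[n:]
    dd = delta[:, None] - delta[None, :]            # dd[i, j] = delta_i - delta_j
    # Electrical output P_ei with susceptances B and transfer conductances C.
    Pe = (E[:, None] * E[None, :] * (B * np.sin(dd) + C * np.cos(dd))).sum(axis=1)
    return np.concatenate([omega, (Pm - D * omega - Pe) / M])

# Two-machine example; all numbers are illustrative.
M = np.array([1.0, 2.0]); D = np.array([0.5, 0.4]); E = np.array([1.0, 1.0])
B = np.array([[0.0, 1.2], [1.2, 0.0]])
C = np.array([[0.0, 0.1], [0.1, 0.0]])
Pm = (E[:, None] * E[None, :] * C).sum(axis=1)      # makes delta = omega = 0 an equilibrium
print(swing_rhs(np.zeros(4), M, D, Pm, E, B, C))    # -> [0. 0. 0. 0.]
```

Note that with nonzero C the equilibrium condition couples Pm to the conductances, which is why transfer conductances cannot simply be dropped in lossy systems.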
Furthermore, for simplicity, we assume that an equilibrium point of (3) is x = 0; otherwise, we perform a coordinate transformation so as to shift the equilibrium point to x = 0.
Then, we address the following problem.
Problem 1 For the power system (3), assume that the equilibrium point x = 0 is asymptotically stable. Then, estimate the region of attraction (ROA), i.e., the set of all x ∈ R^{2n−1} such that the solution of (3) starting from x converges to the equilibrium point.

Estimation of region of attraction by using sum of squares programming [1]
As a method to solve Problem 1, [1] has proposed an estimation method of the ROA based on sum of squares (SOS) programming. In this section, we briefly introduce it.

Preliminary
For the power system (3), the Lyapunov stability theory (see, e.g., [8]) yields the following result.
Lemma 1 Consider the power system (3) with the asymptotically stable equilibrium point x = 0. Assume that there exist a set D ⊂ R^{2n−1} containing x = 0 and a continuously differentiable function V : D → R_0+ such that

V(0) = 0 and V(x) > 0 for every x ∈ D \ {0},  (5)
V̇(x) < 0 for every x ∈ D \ {0},  (6)

and let c ∈ R_+ be such that L_c(V(x)) ⊆ D is bounded. Then every solution starting in L_c(V(x)) converges to x = 0.
The function V(x) is called the Lyapunov function. Lemma 1 means that L_c(V(x)) ⊆ D is an estimate of the ROA. That is, this result reduces Problem 1 to the problem of finding a pair (V(x), c) satisfying (5), (6), and L_c(V(x)) ⊆ D for some set D.
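Before any SOS machinery, Lemma 1 can be spot-checked numerically: sample points of a candidate sublevel set L_c(V) and verify the sign conditions (5) and (6) there. The system, V, and c below are illustrative assumptions, not the paper's.

```python
import numpy as np

def f(x):
    # Illustrative 2-D nonlinear system with an asymptotically stable origin.
    return np.array([-x[0] + x[1]**2, -x[1]])

def V(x):
    # Candidate Lyapunov function.
    return x[0]**2 + x[1]**2

def Vdot(x):
    # Time derivative of V along trajectories: grad V(x) . f(x).
    return 2.0 * x @ f(x)

c = 0.5
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(5000, 2))
inside = [p for p in samples if 1e-6 < V(p) <= c]        # nonzero points of L_c(V)
ok = all(V(p) > 0.0 and Vdot(p) < 0.0 for p in inside)
print("(5)-(6) hold on", len(inside), "sampled points:", ok)
```

Such sampling can refute a bad pair (V, c) quickly, but only an algebraic certificate, which is what the SOS approach below provides, can confirm one.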

Estimation algorithm of region of attraction based on sum of squares programming
The authors of [1] have proposed the use of SOS programming to find appropriate V(x) and c.

Sum of squares programming problems
We first define the SOS. A polynomial p(x) ∈ P is called an SOS polynomial if there exist polynomials p_1(x), p_2(x), . . . , p_m(x) such that p(x) = Σ_{i=1}^m p_i^2(x).
SOS programming problems are optimization problems in which some polynomials are constrained to be SOS. Under certain conditions, problems of this type can be transformed into semidefinite programming problems [2] and efficiently solved by numerical optimization techniques.
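The link to semidefinite programming is concrete: p(x) is SOS if and only if p(x) = z(x)^T Q z(x) for some positive semidefinite "Gram" matrix Q, where z(x) stacks monomials up to deg(p)/2, and searching for such a Q is a semidefinite program. The sketch below skips the solver and only verifies a known feasible Q for a classical example (not from the paper).

```python
import numpy as np

def p(x, y):
    # Classical SOS example: 2x^4 + 2x^3 y - x^2 y^2 + 5y^4.
    return 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4

def z(x, y):
    # Monomial basis of degree 2 in (x, y).
    return np.array([x**2, y**2, x*y])

# One feasible Gram matrix: p(x, y) = z^T Q z with Q positive semidefinite.
Q = np.array([[2.0, -3.0, 1.0],
              [-3.0, 5.0, 0.0],
              [1.0, 0.0, 5.0]])

print("Q is PSD:", np.linalg.eigvalsh(Q).min() >= -1e-9)
rng = np.random.default_rng(1)
for x, y in rng.normal(size=(5, 2)):
    assert abs(z(x, y) @ Q @ z(x, y) - p(x, y)) < 1e-8
print("z^T Q z matches p on random samples")
```

In an actual SOS program, the off-diagonal entries of Q are decision variables tied together by linear coefficient-matching constraints, and an SDP solver searches the PSD cone for a feasible Q.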

Coordinate transformation
However, the idea of the SOS cannot be directly applied to the system because (3) is not described by polynomials due to the trigonometric functions in (4). Hence, [1] has performed the coordinate transformation z(t) = h(x(t)) defined in (8), which replaces the trigonometric terms by new state variables. Here, for the convenience of explanation, the numbering of z_i(t) (i = 1, 2, . . . , 3n − 2) is different from that in [1]. From (3), (4), and (8), the transformed system is written as (9)-(12). In (9), g(z(t)) = 0 is the constraint imposed by the property of the trigonometric functions, i.e., sin^2 x_i + cos^2 x_i = 1 for every x_i ∈ R. We see from (9)-(12) that the transformed system is described by polynomials.
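The recasting can be illustrated on a single machine: introduce s := sin x_1 and c := cos x_1 as states with the algebraic constraint s^2 + c^2 − 1 = 0, which makes the vector field polynomial. The single-machine (pendulum-like) model below is an illustrative stand-in for the n-machine transformation z = h(x) in [1]; the damping value is an assumption.

```python
import numpy as np

def original_rhs(x):
    # x = (angle, angular velocity); non-polynomial due to sin.
    return np.array([x[1], -np.sin(x[0]) - 0.5*x[1]])

def recast_rhs(zz):
    # zz = (s, c, w) = (sin x1, cos x1, x2); polynomial right-hand side.
    s, c, w = zz
    return np.array([c*w, -s*w, -s - 0.5*w])

def h(x):
    # The recasting map (single-machine analogue of z = h(x)).
    return np.array([np.sin(x[0]), np.cos(x[0]), x[1]])

# Consistency check: d/dt h(x) along the original flow equals the
# polynomial field evaluated at h(x).
x = np.array([0.7, -0.3])
dx = original_rhs(x)
dh = np.array([np.cos(x[0])*dx[0], -np.sin(x[0])*dx[0], dx[1]])
print(np.allclose(dh, recast_rhs(h(x))))       # True
```

The constraint s^2 + c^2 = 1, the single-machine analogue of g(z) = 0, is invariant along the recast flow, so it enters the SOS problems as an equality constraint rather than extra dynamics.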
For this system, if we find, by SOS programming, a pair (V(z), c) satisfying (5), (6), and L_c(V(z)) ⊆ D (with z in place of x), then from Lemma 1 we can use L_c(V(h(x))) as an estimate of the ROA of the original system (3).

Estimation algorithm
The estimation algorithm in [1] is called the expanding interior algorithm. It was originally introduced in [6], and its idea is as follows. Consider the set L_γ(p_+(z)) ⊂ R^{3n−2}, where p_+(z) ∈ P_+ is a positive definite polynomial and γ ∈ R_+ is a positive number. If L_γ(p_+(z)) ⊆ L_c(V(z)), then we can expand L_c(V(z)) by expanding L_γ(p_+(z)) as illustrated in Fig. 3, which will present a better estimate of the ROA. Based on this idea, we consider the following optimization problem (OP), where p_+(z) and c are assumed to be given. The first and second constraints correspond to (5) and (6), respectively, where L_c(V(z)) corresponds to D, and the third constraint is the containment L_γ(p_+(z)) ⊆ L_c(V(z)). By replacing z = 0 with q_+(z) = 0, where q_+(z) ∈ P_+, and applying the Positivstellensatz theorem (see, e.g., [7]) to the resulting constraints, we obtain the SOS programming problem (SOSP), where S is the set of SOS polynomials, v_1(z), v_2(z), v_3(z) ∈ P^{n−1} are polynomial vectors, s_1(z), s_2(z), s_3(z) ∈ S are SOS polynomials, and q_+(z) is assumed to be given. The constraints (13)-(15) correspond to sufficient conditions for the three constraints of the OP, respectively. Since the SOSP contains products of the variables such as s_2(z)V(z), it cannot be transformed into a semidefinite programming problem and cannot be efficiently solved. Therefore, [1] has proposed the following algorithm. Here, k and ℓ are the iteration indices, and superscripts are used to represent the variables at each iteration; for instance, V^k(z) denotes V(z) at iteration k. Note that the following algorithm is a modified expanding interior algorithm; the original algorithm can be found in [6], and some variations are given in [7].
Save the resulting c as c^k.
Step 4 Set γ := γ^k, s_2(z) := s_2^k(z), and s_3(z) := s_3^k(z), and solve the SOS programming problem SOSP3. Save the resulting c as c^k.
Step 5 Set c := c^k, s_2(z) := s_2^k(z), and s_3(z) := s_3^k(z), and solve the SOS programming problem SOSP4. Save the resulting γ and V(z) as γ^k and V^k(z), respectively.
Step 8 Output V^k(h(x)) and c^k.
The flowchart of Algorithm 1 is illustrated in Fig. 4. The algorithm is composed of two loops. In Loop 1, we solve four SOS programming problems, i.e., the SOSP1-SOSP4, at each iteration k to obtain a solution to the SOSP. The SOSP1 is for determining c to be given in the SOSP2. This problem can be solved by a linear search for c. In fact, the constraints (14) and (15) include no products of the variables when V(z), γ, and c are fixed, and thus the problem of finding v_2(z), v_3(z), s_1(z), s_2(z), and s_3(z) satisfying the constraints can be solved as a semidefinite feasibility problem. The SOSP2 is for finding a better γ. In a similar way, this problem can also be solved by a linear search for γ. The SOSP3 and the SOSP4 play roles similar to the SOSP1 and the SOSP2, respectively, where s_2(z) and s_3(z) are fixed instead of V(z). In summary, fixing some of the variables is the approach taken to handle the products of the variables. Meanwhile, in Loop 2, we update p_+(z) and γ^0 based on the V(z) and c obtained in Loop 1. This is because setting L_γ(p_+(z)) := L_c(V(z)) and solving the SOSP again gives a better estimate of the ROA, as seen from Fig. 3. Note that after this update the previous solution may remain feasible; that is, c would not change. However, the algorithm does not get stuck, because V(z) is updated so as to increase γ in the SOSP4.

Problem to be considered
As mentioned in Sect. 1, the above algorithm is complicated. In fact, it consists of nine steps (Steps 0-8), and the four SOS programming problems (SOSP1-SOSP4) have to be solved iteratively while changing the fixed variables. Such complexity increases the time and effort spent to understand and implement the algorithm, which is undesirable in practice. In addition, the complexity makes it difficult to theoretically analyze the behavior of the algorithm.

Improvement of algorithm
Now, we present a simpler algorithm to solve the SOSP.

(Fig. 4 flowchart: Start → Step 1 → Steps 2-5 (Loop 1) → Step 8 → End; the algorithm terminates when the largest absolute value of the coefficients of the update of p_+(z) becomes smaller than a given tolerance.)

Proposed algorithm
As explained in Sect. 3.2.3, the SOSP includes products of the variables, and as a result, it cannot be efficiently solved as a semidefinite programming problem. Since these products are s_1(z)γ, s_2(z)V(z), and s_3(z)V(z), the SOSP can be solved by a linear search for γ if s_2(z) and s_3(z) are fixed. Therefore, we consider the following two SOS programming problems: -a problem to determine s_2(z) and s_3(z) in addition to c, -a problem to find V(z) maximizing γ, and alternately solve them.
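Such a linear search for γ is typically implemented as bisection over semidefinite feasibility queries: shrinking L_γ(p_+) only relaxes the containment constraint, so if a value of γ is feasible, every smaller value is too. The `feasible` oracle below is a hypothetical stand-in for one SDP feasibility solve, mocked here with a hidden threshold.

```python
def bisect_max(feasible, lo, hi, tol=1e-6):
    """Largest g in [lo, hi] with feasible(g), assuming monotone feasibility."""
    assert feasible(lo) and not feasible(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid      # mid is feasible: the optimum lies at or above it
        else:
            hi = mid      # mid is infeasible: the optimum lies below it
    return lo

gamma_star = 0.37                         # hidden optimum for the mock oracle
print(round(bisect_max(lambda g: g <= gamma_star, 0.0, 1.0), 3))  # 0.37
```

Each query costs one semidefinite feasibility solve, so the search needs about log2((hi − lo)/tol) solves, roughly 20 here.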
In the modified algorithm, we only have to solve the SOSP1 and the SOSP4. The SOSP1 is for determining c, s 2 (z), and s 3 (z), and the SOSP4 is for finding V (z) maximizing γ . This algorithm does not need the steps to solve the SOSP2 and the SOSP3, and thus is simpler than the original one.
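The alternation between the two problems can be sketched as a skeleton: the two SOS solves are replaced by hypothetical stand-ins acting on a toy objective with a bilinear coupling, which illustrates the monotone growth of γ that Theorem 1 later establishes. Everything below is an illustrative analogue, not the paper's SOSP1/SOSP4.

```python
def g(a, b):
    # Toy "gamma" with a bilinear coupling between the two variable groups.
    return 4.0 - (a - b)**2 - (b - 1.0)**2

def solve_step1(a):
    # Stand-in for SOSP1: with "V" (a) fixed, the best "c, s2, s3" (b).
    return (a + 1.0) / 2.0

def solve_step2(b):
    # Stand-in for SOSP4: with "s2, s3" (b) fixed, the best "V" (a).
    return b

a, gammas = 0.0, []
for _ in range(30):
    b = solve_step1(a)
    a = solve_step2(b)
    gammas.append(g(a, b))

# Each half-step maximizes g over its own variables, so g never decreases.
assert all(x <= y + 1e-12 for x, y in zip(gammas, gammas[1:]))
print(round(gammas[-1], 4))       # 4.0
```

As in the real algorithm, alternating exact maximizations give a nondecreasing objective, but only convergence, not global optimality, can be concluded.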
Furthermore, we choose the initial Lyapunov function V^0(z) in Step 0 by solving the following problem (SOSP0), where β ∈ R_+ is a positive number, r_+(z) ∈ P_+ is a positive definite polynomial, and q_+(z), β, and r_+(z) are assumed to be given. The first constraint corresponds to (13), that is, the first constraint of the OP. Meanwhile, the second constraint corresponds to the third constraint of the OP; in fact, it is derived by substituting s_3(z) = 1 into the constraint (15) of the SOSP and replacing c and V(z) with β and r_+(z), respectively. By imposing these constraints, we can obtain a Lyapunov function satisfying (5) and (6). Also, the SOSP0 is an SOS feasibility problem and does not include products of the variables. Thus, this problem can be efficiently solved as a semidefinite feasibility problem.
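A common alternative way to obtain an initial quadratic Lyapunov candidate, assuming only the linearization of the system at the origin, is to solve a Lyapunov equation; the sketch below shows that standard initialization, not the SOSP0 itself, and the matrix A is illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Linearization x' = Ax at the origin (illustrative, Hurwitz).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])

# Solve A^T P + P A = -I; P > 0 exists because A is Hurwitz, and
# V0(x) = x^T P x is then a Lyapunov function for the linearization.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

x = np.array([0.3, -0.2])
print(np.linalg.eigvalsh(P).min() > 0,        # V0 is positive definite
      x @ (A.T @ P + P @ A) @ x < 0)          # V0' < 0 along the linear flow
```

Such a V^0 is only guaranteed to work near the origin, whereas the SOSP0 above additionally enforces the sublevel-set condition through β and r_+(z).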

Analysis
For the proposed algorithm, we obtain the following result.
Proof (i) We prove the statement by showing the following three facts (a)-(c) for the four cases ℓ = 1 and k = 1, ℓ = 1 and k > 1, ℓ > 1 and k = 1, and ℓ > 1 and k > 1. The proofs of (a)-(c) for these cases are given in Appendix A.
In Theorem 1, (i) guarantees the existence of solutions to the SOSP1 and the SOSP4 at each iteration k ∈ {2, 3, . . .}, and further clarifies the behavior of γ^k. More precisely, γ^k is monotone nondecreasing with respect to k ∈ {0, 1, . . .} for each ℓ ∈ {1, 2, . . .}. In this sense, the estimate of the ROA does not become worse in Loop 1. Meanwhile, (ii) means that when ℓ is updated, the resulting L_c(V(z)) does not become smaller than the previous one; that is, Loop 2 also does not make the estimate of the ROA worse.
From Theorem 1, the proposed algorithm works as long as there exists a solution to the SOSP1 at k = 1, and it converges if the ROA is bounded. The convergence is shown as follows. From (18), γ^k converges as k → ∞ for each ℓ ∈ {1, 2, . . .} if the ROA is bounded. Moreover, for a bounded ROA, p_+^ℓ(z) converges as ℓ → ∞. In fact, L_c(V(z)) does not become smaller in Loop 2 by (ii) of Theorem 1, and thus L_γ(p_+(z)) asymptotically approaches L_c(V(z)) if the ROA is bounded (see Fig. 3). These two facts prove the convergence of the proposed algorithm.
Theorem 1 does not guarantee the optimality of the obtained solution, but this is reasonable for the SOSP: as explained in Sect. 3.2.3, the SOSP cannot be transformed into a semidefinite programming problem, and thus it is difficult to directly obtain the optimal solution.

Examples
In order to demonstrate the performance of the proposed algorithm, we provide examples for the two power systems discussed in [1].

Power system without transfer conductances (Model A)
This is a power system without transfer conductances, composed of three generators, which has been called Model A in [1]. In (19), for simplifying the description, the relative phase angles and angular velocities with respect to generator 3 are chosen as the state variable vector x(t). We further set p_+^0(z) := Σ_{i=1}^6 z_i^2 − 1.5 z_1 z_2 − 0.5 z_5 z_6, γ := 0.05, and p := 0.2. Figure 5 illustrates the time evolution of γ^k at ℓ = 1. This shows that γ^k increases as k increases, which demonstrates (18) in Theorem 1. Figure 6 also illustrates the time evolution of L_{c^k}(V^k(h(x))) and L_{γ^k}(p_+(h(x))), where ℓ = 1, (x_3, x_4) = (0, 0), and the thick and thin lines express the boundaries of L_{c^k}(V^k(h(x))) and L_{γ^k}(p_+(h(x))), respectively. We observe that L_{γ^k}(p_+(h(x))) ⊆ L_{c^k}(V^k(h(x))) holds at each k, and that both sets become larger as k increases.
The proposed algorithm then outputs V(h(x)) and c. The resulting estimate of the ROA, i.e., L_c(V(h(x))), is depicted in Fig. 7, where (a) is for (x_3, x_4) = (0, 0); the thin line expresses the estimate given in [1], i.e., the estimate obtained by Algorithm 1, and the gray area represents the true ROA. We see that the resulting estimate is included in the true ROA. In addition, by numerical integration, the volume of the estimated set is calculated as 2.28 × 10^2 for the proposed algorithm and 2.02 × 10^2 for Algorithm 1. Thus, we conclude that the estimate of the ROA by the proposed algorithm is better than that by the existing algorithm.
For the second power system discussed in [1], we set p_+^0(z) similarly. Figure 8 illustrates the time evolution of γ^k at ℓ = 1. Figure 9 also illustrates the time evolution of L_{c^k}(V^k(h(x))) and L_{γ^k}(p_+(h(x))) at ℓ = 1 in the same way as in Fig. 6. We see that results similar to those for the first system are obtained. Figure 10 depicts the resulting estimate of the ROA (that is, L_c(V(h(x)))) in the same way as in Fig. 7, where (a) is for (x_3, x_4) = (0, 0) and (b) is for (x_3, x_4) = (7.5, 7.5). It turns out that the resulting estimate is included in the true ROA. Moreover, by numerical integration, the volume of the estimated set is calculated as 1.97 × 10^3 for the proposed algorithm and 1.67 × 10^3 for Algorithm 1. This shows that the estimate of the ROA by our algorithm is better than that by the existing algorithm in this case as well.

Conclusion
This paper has considered an algorithm for estimating the ROA of a given power system by using SOS programming. By simplifying the existing method for handling the bilinear constraints of an SOS programming problem, we have presented an algorithm simpler than the existing one. In addition, we have analyzed the algorithm and shown its convergence under certain conditions. These results provide an ROA estimation algorithm that is simpler than the existing one and comes with a theoretical guarantee. Future work includes extending our algorithm to power systems with uncertain parameters, as well as extending it to controller design for improving the transient stability of power systems.