A Domain Decomposition Approach to Solve Dynamic Optimal Power Flow Problems in Parallel

We propose a parallel solver for the linear systems of equations arising from the application of Primal Dual Interior Point Methods to Dynamic Optimal Power Flow problems. Our solver is based on the Generalised Minimal Residual method in combination with an additive Schwarz domain decomposition method as preconditioner. This preconditioner exploits the structure of Dynamic Optimal Power Flow problems, which, after linearization, is that of a matrix with large diagonal blocks and only a few off-diagonal elements. These elements correspond to intertemporal couplings due to ramping and energy storage constraints and are partially neglected in order to solve the problem in parallel. We test our method on a large-scale optimisation problem and show that a parallel speedup can be obtained.


Introduction
While today's power grid infrastructure has been designed for centralised power generation in conventional power plants, the ever increasing share of renewable energy sources (RES) leads to increasingly uncertain, volatile and decentralised generation. In order to ensure reliable grid operation and to maintain today's security of supply, it is necessary to develop methods to accurately simulate a power grid at high temporal resolutions. This calls for efficient methods for power grid optimisation, including an accurate treatment of the non-linear and non-convex AC power flow constraints, and such methods still need to be improved.
The task to find the optimal operating state of a given power grid, also known as Optimal Power Flow (OPF), is formalized as the minimisation of a cost function with respect to a vector of continuous optimisation variables like the generated active power and the generated reactive power. Dynamic Optimal Power Flow (DOPF) considers several OPF problems, each corresponding to one specific point in time. In this case, the power grid's power demand is time-dependent and additional constraints must be taken into account. These constraints couple some of the optimisation variables corresponding to different time steps. Among others, these couplings are introduced by energy storage facilities and ramping constraints for conventional power plants [20]. Since DOPF is a large-scale non-linear and nonconvex optimisation problem, solving it in parallel is of crucial importance.
For this kind of problem, Primal Dual Interior Point Methods (PDIPM) have proven to be among the most powerful algorithms. The main computational effort, when a PDIPM is applied, lies in solving linear systems of equations [14], a task well suited to parallel execution on multi-core CPUs.
There exist a few works on solving, in parallel, the linear systems that arise from PDIPM applied to DOPF. One approach is based on parallel matrix factorization by means of Schur complement techniques [6, 20]. Other strategies use Benders decomposition to split the complete optimisation problem into smaller ones [1].
Our contribution is the use of a Schwarz domain decomposition method as preconditioner for Krylov-type iterative methods for solving these linear systems of equations in parallel. The original Schwarz method was formulated in 1870 as a theoretical tool for proving the existence of solutions of elliptic partial differential equations (PDEs) on complicated domains [17]. Later on, modifications of it were used as standalone iterative methods for solving PDEs, and it has become a standard technique for preconditioning Krylov-type methods in the context of PDEs [18]. We apply this technique to DOPF by decomposing the time domain of interest into several smaller subdomains, which allows the use of multiple processors for computation.

Dynamic Optimal Power Flow
We will now formulate the DOPF problem in a mathematically rigorous way. This formulation has already been stated in more detail by several authors, e.g. in [12,20] and [16]. We would like to mention that we consider a continuous formulation of the DOPF problem, i.e. one without any discrete decision variables. A non-linear mixed-integer formulation of an OPF is considered, for example, in [19].

Formulation of the DOPF
In order to formulate the DOPF problem for a given time period of interest [0, T], we consider a uniform partition 0 = t_0 < t_1 < … < t_{N_T} = T and denote the set of time steps by T := {1, …, N_T}. The power grid, consisting of N_B nodes denoted by B := {1, …, N_B} and N_E transmission lines denoted by E ⊂ B × B, is described by an admittance matrix Y ∈ C^{N_B × N_B}. The admittance matrix is symmetric, i.e. Y = Y^T, and furthermore Y_lk = Y_kl ≠ 0 (for k ≠ l) if and only if there is a branch connecting nodes k and l, i.e., (k, l) ∈ E and (l, k) ∈ E. Therefore, Y is a sparse matrix for most real-world power grids. The complex voltage at node k ∈ B for time step t ∈ T has real part E_k^t ∈ R and imaginary part F_k^t ∈ R. We define E^t := [E_k^t]_{k∈B} and F^t := [F_k^t]_{k∈B}. Moreover, we define index sets, each with its respective cardinality, for the different active and reactive power processes, i.e. for conventional power plants (C), renewable energy sources (R), storage facilities and DC lines. The resulting vector of variables corresponding to active power and energy for time step t is given by P^t ∈ R^{N_P}. In a similar way, the vector of reactive power injections Q^t ∈ R^{N_Q} is defined. Every variable related to active and reactive power can be assigned to a specific node of the power grid by means of the power injection and extraction matrices C_P ∈ {−1, 0, 1}^{N_B × N_P} and C_Q ∈ {−1, 0, 1}^{N_B × N_Q} [21, p. 23]. Since the states of charge (SOC) of storage facilities do not directly influence the power flow equations, the corresponding columns of C_P are equal to the zero vector.
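The sparsity pattern of Y described above can be reproduced with a few lines of Python (a minimal SciPy sketch; shunt elements, transformer taps and line charging are omitted, and the function name is ours, not the paper's):

```python
import numpy as np
from scipy.sparse import coo_matrix

def build_admittance(n_bus, branches):
    """Assemble a sparse nodal admittance matrix Y from a branch list.

    branches: list of (k, l, y_kl) with 0-based node indices and the
    series admittance y_kl of the line connecting k and l.
    """
    rows, cols, vals = [], [], []
    for k, l, y in branches:
        # off-diagonal entries: Y_kl = Y_lk = -y_kl (non-zero iff a branch exists)
        rows += [k, l]; cols += [l, k]; vals += [-y, -y]
        # diagonal entries accumulate the admittances of all incident branches
        rows += [k, l]; cols += [k, l]; vals += [y, y]
    # duplicate (row, col) pairs are summed when converting to CSR
    return coo_matrix((vals, (rows, cols)), shape=(n_bus, n_bus)).tocsr()

Y = build_admittance(3, [(0, 1, 1 - 2j), (1, 2, 2 - 4j)])
```

For the small 3-node example, Y is symmetric and Y[0, 2] = 0 because nodes 0 and 2 are not connected, matching the characterisation in the text.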

Constraints
At first, we consider the equality constraints, beginning with the power flow equations. Denoting the active and reactive power loads for time step t by L_p^t and L_q^t, respectively, and assigning the loads to the nodes with the help of the matrix C_L ∈ {0, 1}^{N_B × N_L}, the AC power flow equations are given by the nodal active and reactive power balance equations (5) and (6) [22], which describe the sum of power flows at each node k; note that we dropped the index t to improve readability. In the following, we abbreviate equations (5) and (6) by the "symbolic" equation (7). Besides the power flow equations (7), one has to take into account an equality constraint for the slack bus, given by F_1^t = 0 for all t, and, for node pairs k, l ∈ DC modelling a DC line kl, a coupling constraint in which line losses are taken into account by means of the factor ν_DC ∈ (0, 1). Moreover, an additional set of equality constraints (9) has to be imposed for modelling the state of charge SOC_i^t of storage facilities, with factors ν_l, ν_g ∈ (0, 1) describing the efficiency of the loading process P_{sl,i}^t and the generation process P_{sg,i}^t, respectively.

Secondly, we take a look at the inequality constraints. Due to technical restrictions arising from power plants, renewable energy sources, storage facilities and DC transmission lines, it is necessary to impose lower and upper bounds on active and reactive power injections; in our application, the only time-dependent bounds correspond to the power injected by renewable energy sources, i.e. P_r^t. The node voltages and the active power flow over transmission lines, denoted by p_kl, must be constrained as well. Finally, we impose ramping constraints (13) for conventional power plants to bound the rate of change of the injected power between consecutive time steps, where α is the ramping factor in percentage per unit time.
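The two intertemporal constraint types can be illustrated with a small Python sketch. The exact form of the SOC update is an assumption modelled on the description above (the paper's Eq. (9) is not reproduced verbatim), and all function names and signatures are ours:

```python
import numpy as np

def soc_feasible(soc0, p_load, p_gen, nu_l, nu_g, dt, soc_min, soc_max):
    """Propagate the state of charge of one storage unit over time and
    check the box constraints.  Assumed SOC update (sketch of Eq. (9)):
        SOC^{t+1} = SOC^t + dt * (nu_l * P_load^t - P_gen^t / nu_g)
    with loading/generation efficiencies nu_l, nu_g in (0, 1]."""
    soc = soc0
    for pl, pg in zip(p_load, p_gen):
        soc = soc + dt * (nu_l * pl - pg / nu_g)
        if not (soc_min <= soc <= soc_max):
            return False
    return True

def ramping_feasible(p_g, alpha, p_max, dt=1.0):
    """Check |P^{t+1} - P^t| <= alpha * dt * P_max between consecutive
    time steps for one conventional generator (sketch of Eq. (13))."""
    p = np.asarray(p_g, dtype=float)
    return bool(np.all(np.abs(np.diff(p)) <= alpha * dt * p_max))
```

With α = 0.4 and P_max = 10, a generation profile may change by at most 4 units per time step, which mirrors the 40 %/h setting used later in the experiments.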

Cost Function
The cost function f accounts for expenditure and income related to active power processes over the complete time interval [0, T] and is composed of contributions from each time step t ∈ T [5], f(x) = Σ_{t∈T} f^t(P^t), with

f^t(P^t) = Σ_{i∈C} ( a_{C,i,2} (P_{g,i}^t)^2 + a_{C,i,1} P_{g,i}^t ) + Σ_{i∈R} ( a_{R,i,2} (P_{r,i}^t)^2 + a_{R,i,1} P_{r,i}^t )

and coefficients a_{∗,∗,∗} ∈ R.

Optimisation Problem
With the definitions of the previous sections at hand, the DOPF problem can be stated in an abstract way as problem (16), where the vector of optimisation variables is given by x = (x^1, …, x^{N_T}) with x^t collecting the variables (E^t, F^t, P^t, Q^t) of time step t. Moreover, we used and will use the following definitions: n_{x,t} := 2N_B + N_P + N_Q and n_x := N_T n_{x,t}, and, from now on, let n_{λ,t} be the number of equality constraints corresponding to time step t, i.e. λ^t ∈ R^{n_{λ,t}} with λ^t = (λ_I^t, λ_S^t) denoting the Lagrangian multipliers corresponding to g_I^t and g_S^t, respectively. Analogously, μ^t = (μ_I^t, μ_R^t) ∈ R^{n_{μ,t}} denotes the Lagrangian multipliers for the inequality constraints h_I^t and h_R^t. Consequently, we define n_λ := N_T n_{λ,t} and n_μ := N_T n_{μ,t}. Note that the ramping constraints h_R given by (13) and the storage constraints g_S given by (9) establish the only couplings between the variables corresponding to different time steps; without them, a solution to (16) could be obtained by solving N_T independent OPF problems. The optimisation problem (16) is non-linear due to the AC equations and due to the voltage and line flow inequality constraints. Moreover, the latter and the AC equations are the sources of non-convexity in (16).
To simplify the notation in the following sections, we abbreviate the optimisation problem (16) as

min_x f(x)  subject to  g(x) = 0,  h(x) ≤ 0,  (18)

with quadratic functions f, g and h.

Primal Dual Interior Point Method for DOPF
In this section, we briefly describe the application of the Primal Dual Interior Point Method (PDIPM) to a DOPF problem. The description is in line with the implementation found in the MATLAB package MATPOWER [21].

The PDIPM Algorithm
For the general optimisation problem (18), the corresponding Lagrangian function is L(x, λ, μ) = f(x) + λ^T g(x) + μ^T h(x). Assuming additional constraint qualifications, e.g., the linear independence constraint qualification (LICQ) [14], for every local minimum x* of (18) there exist corresponding Lagrangian multipliers λ*, μ* such that (x*, λ*, μ*) satisfies the Karush-Kuhn-Tucker (KKT) conditions (21) [5], where s ∈ R^{n_μ} is a vector containing slack variables and S ∈ R^{n_μ × n_μ} denotes the diagonal matrix whose diagonal elements are S_kk = s_k. The PDIPM is an algorithm to find solutions to Eq. (21). Mathematically, it applies Newton's method to a slightly modified version of this equation: a perturbation term, which decreases with the iteration index k, is added to the complementarity condition. This term keeps the slack variables s away from zero and, thus, the inequality constraints inactive. This means that solutions (x^{(k)}, s^{(k)}, λ^{(k)}, μ^{(k)}) to this modified version of Eq. (21) lie in the interior of the feasible region and not on its boundary; hence, the method is called an interior point method. More rigorously formulated, the modified system (22) is solved for a decreasing sequence of barrier parameters γ_k. For details we refer the inclined reader to [5, 8] and [21].
Remark A very basic PDIPM is applied, i.e. there is no globalisation strategy. Since we focus on parallel linear algebra techniques for DOPF problems, a locally convergent algorithm is sufficient for our purpose.
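The perturbed KKT system solved by the PDIPM can be sketched as a residual function. This is a hedged illustration with our own names, following the slack convention h(x) + s = 0, s > 0 used by MATPOWER's mips; only the structure, not the paper's exact equations, is reproduced:

```python
import numpy as np

def perturbed_kkt_residual(grad_f, Jg, Jh, g, h, x, s, lam, mu, gamma):
    """Residual of the perturbed KKT conditions (sketch).

    The complementarity condition S mu = 0 of Eq. (21) is relaxed to
    S mu = gamma e, which keeps the slacks s away from zero while the
    barrier parameter gamma is driven to 0 over the PDIPM iterations."""
    r_x = grad_f(x) + Jg(x).T @ lam + Jh(x).T @ mu  # stationarity of the Lagrangian
    r_g = g(x)                                      # equality feasibility
    r_h = h(x) + s                                  # inequality feasibility, s > 0
    r_c = s * mu - gamma                            # perturbed complementarity
    return np.concatenate([r_x, r_g, r_h, r_c])

# tiny example: min x1^2 + x2^2  s.t.  x1 + x2 = 2,  -x1 <= 0
grad_f = lambda x: 2 * x
Jg = lambda x: np.array([[1.0, 1.0]])
Jh = lambda x: np.array([[-1.0, 0.0]])
g = lambda x: np.array([x[0] + x[1] - 2.0])
h = lambda x: np.array([-x[0]])
x = np.array([1.0, 1.0]); s = np.array([1.0])
lam = np.array([-2.0]); mu = np.array([0.0])
res = perturbed_kkt_residual(grad_f, Jg, Jh, g, h, x, s, lam, mu, gamma=0.0)
```

At the minimiser of the tiny example the residual vanishes for γ = 0, i.e. the unperturbed KKT conditions hold.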

KKT Matrix in PDIPM
The application of Newton's method to Eq. (22) yields iterations in which linear systems of equations of the form (23) must be solved; henceforth we omit the iteration index k. Assuming that s, μ > 0, the Newton system (23) can be transformed into a reduced, symmetric saddle point form (24) with the diagonal matrix Σ = diag(μ_1/s_1, …, μ_{n_μ}/s_{n_μ}). The KKT matrices A, arising in every PDIPM iteration k, may become more and more ill-conditioned as a KKT point is approached. This is caused by the sequence of scaling matrices Σ^{(k)}: if strict complementarity holds at the KKT point, some diagonal entries of Σ^{(k)} tend to infinity while the others tend to zero [14]. For this reason, many IPM software packages (like IPOPT [7]) use direct methods such as LDL^T factorizations to solve the appearing linear systems. These methods are less sensitive to ill-conditioned matrices.
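The reduced saddle point matrix of Eq. (24) can be assembled with SciPy sparse matrices as follows. The names hess_L, Jg and Jh (Hessian of the Lagrangian and the equality/inequality constraint Jacobians) are ours, not the paper's:

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix, diags

def reduced_kkt(hess_L, Jg, Jh, s, mu):
    """Sketch of the reduced symmetric KKT matrix of Eq. (24):

        A = [[hess_L + Jh^T Sigma Jh,  Jg^T],
             [Jg,                      0   ]],  Sigma = diag(mu_i / s_i)."""
    Sigma = diags(np.asarray(mu, dtype=float) / np.asarray(s, dtype=float))
    M = csr_matrix(hess_L) + Jh.T @ Sigma @ Jh   # scaled (1,1)-block
    return bmat([[M, Jg.T], [Jg, None]], format="csr")

# tiny example with 2 variables, 1 equality and 2 inequality constraints
hess = np.diag([1.0, 1.0])
Jg = csr_matrix(np.array([[1.0, 1.0]]))
Jh = csr_matrix(np.eye(2))
A = reduced_kkt(hess, Jg, Jh, s=np.array([1.0, 2.0]), mu=np.array([2.0, 1.0]))
```

As μ_i/s_i grows or shrinks over the iterations, the diagonal of the (1,1)-block spreads over many orders of magnitude, which is exactly the ill-conditioning mechanism described above.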
In contrast, iterative linear solvers like GMRES [15] are very sensitive to ill-conditioned matrices, and a good preconditioner is needed to lower the matrices' condition numbers. In exchange, they offer a higher potential for parallelization and allow the use of inexact Interior Point methods, see Sect. 3.3.
The distribution of the spectrum of a matrix K, defined as σ(K) := {λ ∈ C : λ is an eigenvalue of K}, is an indicator of the convergence behaviour of GMRES applied to Kv = b. As a rule of thumb, a spectrum that is clustered away from 0 is beneficial to the rate of convergence of GMRES [11, Th. 4.62]. For general non-linear optimisation problems with KKT matrix K = [[H, B^T], [B, 0]], the so-called constraint preconditioner K̃ = [[H̃, B^T], [B, 0]] is an example of a preconditioner, with H̃ being an approximation to H that is easy to factorize, e.g., H̃ = diag(H) [14]. One can show that σ(K̃^{-1}K) = {1} ∪ σ̃ with |σ̃| = n_x − n_λ. Therefore, at most n_x − n_λ eigenvalues of K̃^{-1}K are not equal to 1. The distribution of these remaining eigenvalues depends on how well H̃ approximates H on the nullspace of B [10]. In Sect. 4, we describe how to take advantage of the special structure given by (16) to construct a parallel preconditioner. Furthermore, we will see that there is a close relation between our proposed preconditioner and the constraint preconditioner defined above, see Sect. 4.4.
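The eigenvalue statement about the constraint preconditioner can be checked numerically on a small random example (dense linear algebra, for illustration only). In exact arithmetic the eigenvalue 1 appears with multiplicity at least 2 n_λ; rounding spreads it slightly, hence the tolerance in the count:

```python
import numpy as np

def constraint_precond_spectrum(H, B):
    """Eigenvalues of Ktilde^{-1} K for the KKT matrix K = [[H, B^T], [B, 0]]
    and the constraint preconditioner Ktilde built with Htilde = diag(H)."""
    n, m = H.shape[0], B.shape[0]
    Z = np.zeros((m, m))
    K = np.block([[H, B.T], [B, Z]])
    Kt = np.block([[np.diag(np.diag(H)), B.T], [B, Z]])
    return np.linalg.eigvals(np.linalg.solve(Kt, K))

# small example: H symmetric positive definite, B of full row rank
rng = np.random.default_rng(0)
n, m = 8, 3
G = rng.standard_normal((n, n))
H = G + G.T + 2 * n * np.eye(n)        # symmetric, diagonally dominant
B = rng.standard_normal((m, n))
ev = constraint_precond_spectrum(H, B)
n_unit = int(np.sum(np.abs(ev - 1.0) < 1e-4))   # eigenvalues (numerically) equal to 1
```

On this example at most n − m = 5 of the n + m = 11 eigenvalues differ from 1, in line with the claim above; how far they stray from 1 depends on how well diag(H) approximates H on the nullspace of B.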

Inexact PDIPM
In contrast to direct methods, iterative solvers allow solving linear systems to a prescribed accuracy. One can exploit this fact by means of inexact Interior Point methods: the linear systems corresponding to the first PDIPM iterations are solved with rather low accuracy and therefore a low number of iterations, and as the PDIPM iterate approaches the exact solution, the accuracy of the linear solver is increased. We use the approach from [3] for determining the accuracy in each PDIPM iteration. Here, (23) is solved inexactly, with a residual r̃^{(k)} whose size is controlled by a forcing sequence η̃_k ∈ (0, 1). With this choice, the update step Δ^{(k)} satisfies an inexact Newton condition, so that "γ_k + η̃_k can be viewed as forcing sequence of inexact Newton methods" [3]. In our numerical experiments, we set η̃_k = 0.1 for all k and derived the relative tolerance for the iterative solver from it.
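A tolerance schedule of this kind might look as follows. This is a hypothetical sketch: the paper's exact formula is not reproduced here; only the forcing parameter η̃ = 0.1 and the floor of 10⁻¹⁰ are taken from the text:

```python
def gmres_rtol(k, eta=0.1, decay=0.5, floor=1e-10):
    """Hypothetical relative tolerance for the iterative solver in PDIPM
    iteration k: start loose (eta), tighten geometrically as the PDIPM
    iterate approaches the solution, and stop decreasing at the floor."""
    return max(eta * decay ** k, floor)
```

The essential behaviour is that early Newton systems are solved cheaply with few GMRES iterations, while later ones are solved accurately, down to the 10⁻¹⁰ floor mentioned above.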
We stopped decreasing the relative tolerance once it reached a value smaller than 10^{-10}.

Termination Criteria for PDIPM
We use the criteria implemented in MATPOWER for monitoring the progress of the PDIPM [21], namely feasibility, gradient, cost variation and complementarity conditions, with x^- denoting the previous iterate.

Schwarz Domain Decomposition Method for DOPF
The key idea behind Schwarz domain decomposition methods applied to DOPF problems is the decomposition of the set of time steps T into several smaller subsets. Due to this decomposition, it is possible to define corresponding submatrices of the global KKT matrix defined in (24) that are smaller in size and therefore faster to factorize; moreover, the factorization of these submatrices can be done in parallel. After computing subsolutions corresponding to these submatrices, one can reconstruct an approximate solution to the global system (24), which is used as preconditioner within a Krylov-type iterative solver. In Sect. 4.1 we introduce the mathematics describing the additive Schwarz method (ASM) in the context of DOPF problems. In Sect. 4.2 we briefly describe some issues related to its implementation. Section 4.3 provides a deeper insight into the subproblems that have to be solved when the ASM is applied. Finally, we describe the relation between the ASM and the above-mentioned constraint preconditioner in Sect. 4.4.

Mathematical Formulation of the Additive Schwarz Method
For the application of the ASM as a preconditioner for the KKT matrix A(x, s, λ, μ) ∈ R^{(n_x+n_λ)×(n_x+n_λ)} defined by (24), the set of time steps T = {1, …, N_T} is first decomposed into q non-overlapping subdomains T̃_1, …, T̃_q. Afterwards, each subdomain T̃_l is expanded by s_ol additional time steps on both ends, yielding an overlapping decomposition T_1, …, T_q of T (see Fig. 1). Henceforward, all expressions involving a tilde refer to the non-overlapping subdomains T̃_l. When restricting the optimisation variables and multipliers to the components belonging to T_l, we write x^{[l]} and λ^{[l]}; their combination yields the vector Δ^{[l]} := (x^{[l]}, λ^{[l]}), which contains the components of the solution vector of (24) belonging to the time steps in T_l. Moreover, let R_l ∈ {0, 1}^{n_l × n} be a restriction matrix corresponding to the subset T_l, defined by its action R_l Δ = Δ^{[l]}. The matrices R_l^x and R_l^λ are composed of either zero or identity blocks; for l = 1, for instance, an identity block is followed by zero blocks, and for l = q it is preceded by them, where the size of the zero matrices varies with each restriction matrix and each l. Analogously, we define R̃_l as the restriction operator corresponding to the non-overlapping subdomain T̃_l.
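The overlapping decomposition of the time steps can be sketched in a few lines (index conventions are ours; the restriction matrices R_l then correspond to picking out the listed time steps):

```python
def decompose_time_steps(n_t, q, s_ol):
    """Split the time steps T = {1, ..., n_t} into q contiguous
    non-overlapping subdomains and expand each by s_ol time steps on
    both ends (clipped at the interval boundary)."""
    bounds = [round(i * n_t / q) for i in range(q + 1)]
    nonoverlap = [list(range(bounds[i] + 1, bounds[i + 1] + 1)) for i in range(q)]
    overlap = [list(range(max(1, lo + 1 - s_ol), min(n_t, hi + s_ol) + 1))
               for lo, hi in zip(bounds[:-1], bounds[1:])]
    return nonoverlap, overlap

nov, ov = decompose_time_steps(12, 3, 2)
```

For n_t = 12, q = 3 and s_ol = 2 this yields the non-overlapping subdomains {1..4}, {5..8}, {9..12} and the overlapping ones {1..6}, {3..10}, {7..12}.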
With these definitions at hand, it is possible to extract submatrices A_l := R_l A R_l^T from the KKT matrix A. In Sect. 4.3, we explore the structure of these submatrices. Under the important assumption that all submatrices A_l are non-singular, we are able to formulate the multiplicative Schwarz method (MSM) for approximately solving AΔ_R = b. We state it in the form of an algorithm [18]:

Algorithm 1 Multiplicative Schwarz method
In every iteration l, the current error with respect to the exact solution Δ_R is e_{l-1} := Δ_R − Δ_R^{l-1}, with associated residual r_{l-1} := b − A Δ_R^{l-1}. By restricting r_{l-1} to subdomain T_l and solving with A_l^{-1}, one obtains an approximation ê_l := A_l^{-1} R_l r_{l-1} to the exact error R_l e_{l-1} on subdomain T_l. Prolongation with R_l^T yields a correction δ_l = R_l^T ê_l to the current approximation Δ_R^{l-1}. Thus, the multiplicative Schwarz method can be seen as a "defect correction" algorithm.⁴ Unfortunately, the MSM is a sequential algorithm. In order to parallelize it, one omits the residual updates in each iteration, yielding the ASM [18], which can be written as M_ASM = Σ_{l=1}^q R_l^T A_l^{-1} R_l. The right-preconditioned linear system for solving AΔ_R = b is then given by (37). Since A and M_ASM are symmetric but in general not positive definite, their product is non-symmetric, and we use the Generalized Minimal Residual (GMRES) method for solving the non-symmetric system (37). In general, the ASM computes a less accurate approximation to the solution of AΔ_R = b and therefore needs an increased number of GMRES iterations compared to the MSM given by Algorithm 1. However, the ASM offers a higher potential of parallelization, which usually results in better parallel scaling. ⁴ In contrast to "classical defect correction" algorithms, there is no converging sequence; nonetheless, the algorithm's construction implies that Δ_R^q ≈ Δ_R.
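A serial sketch of the ASM preconditioner inside GMRES, using SciPy instead of the paper's PETSc/MKL PARDISO stack: each restriction R_l is represented by an index set, each local solve by a sparse LU factorization computed once up front:

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

def asm_preconditioner(A, subdomains):
    """Additive Schwarz preconditioner M_ASM = sum_l R_l^T A_l^{-1} R_l.

    Each (possibly overlapping) index set in `subdomains` plays the role
    of a restriction R_l; the local blocks A_l = R_l A R_l^T are
    LU-factorised once and reused in every application."""
    A = csr_matrix(A)
    local = [(np.asarray(idx), splu(A[idx, :][:, idx].tocsc()))
             for idx in subdomains]

    def apply(v):
        v = np.asarray(v, dtype=float).ravel()
        out = np.zeros_like(v)
        for idx, lu in local:
            out[idx] += lu.solve(v[idx])   # restrict, solve locally, prolong, add
        return out

    return LinearOperator(A.shape, matvec=apply)

# toy 'intertemporal' system: tridiagonal coupling between consecutive steps,
# two overlapping subdomains standing in for two processors
n = 40
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
M = asm_preconditioner(A, [list(range(0, 22)), list(range(18, n))])
b = np.ones(n)
x, info = gmres(A, b, M=M, maxiter=200)
```

In the parallel setting each index set lives on its own processor and the additions across subdomains become the communication step; the serial sketch only illustrates the algebra.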

Implementation
In our implementation, we use one processor per subdomain T_l and distribute A such that every processor stores R̃_l A in its local memory. R̃_l A contains the non-overlapping portion of the rows of A_l = R_l A R_l^T. To set up M_ASM, every processor first has to form its local submatrix A_l, i.e., in this step the overlapping parts of A_l have to be communicated to processor l by processors l − 1 and l + 1. Afterwards, each processor computes an LU-factorization of A_l, i.e., A_l = L_l U_l. This step doesn't involve any inter-processor communication and can be done in parallel.
Employing the ASM as a preconditioner inside the GMRES method requires computing M_ASM v^{(k)} in each iteration k, where v^{(k)} denotes the k-th basis vector of the Krylov subspace K_k(A M_ASM, b). We assume that the basis vectors' data layout is the same as the matrix's, i.e. every process stores the rows of v^{(k)} belonging to its non-overlapping subdomain. Due to A's data layout and its off-diagonal elements, corresponding to intertemporal couplings, additional communication is required.
One can further improve the performance of the ASM by using the so-called restricted version of the ASM [4]. For this preconditioner, just the non-overlapping part of the local solution A_l^{-1} R_l v^{(k)} is prolonged instead of the entire (overlapping) vector. Experiments show a beneficial behaviour in terms of GMRES iterations compared to the standard ASM [4]. Furthermore, this restricted prolongation doesn't involve any communication.
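The restricted variant can be sketched by prolonging only the "owned" (non-overlapping) part of each local solve. Again a serial SciPy illustration with our own names, not the paper's PETSc implementation:

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

def ras_preconditioner(A, overlap_sets, owned_sets):
    """Restricted additive Schwarz: solve on the overlapping subdomains,
    but prolong only the non-overlapping ('owned') rows of each local
    solution, so the prolongation needs no inter-process communication."""
    A = csr_matrix(A)
    local = []
    for ov, own in zip(overlap_sets, owned_sets):
        lu = splu(A[ov, :][:, ov].tocsc())
        keep = [ov.index(i) for i in own]   # local positions of the owned rows
        local.append((ov, own, keep, lu))

    def apply(v):
        v = np.asarray(v, dtype=float).ravel()
        out = np.zeros_like(v)
        for ov, own, keep, lu in local:
            out[own] = lu.solve(v[ov])[keep]   # each row is written exactly once
        return out

    return LinearOperator(A.shape, matvec=apply)

n = 40
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
owned = [list(range(0, 20)), list(range(20, n))]
overlap = [list(range(0, 23)), list(range(17, n))]
M = ras_preconditioner(A, overlap, owned)
b = np.ones(n)
x, info = gmres(A, b, M=M, maxiter=200)
```

Because the owned sets partition the index range, every entry of the preconditioned vector is written by exactly one subdomain, which is what removes the communication in the prolongation step.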
To further stabilise the LU-factorization, a small regularisation term with parameter ε = 10^{-8} is added to the KKT matrix A, i.e. A + εI is factorized.

Structure of Local KKT Matrices
In this section, we investigate the structure of the local submatrices A_l introduced in Sect. 4.1. A good way to get an idea of their structure is to permute the KKT matrix A in such a way that all rows and columns belonging to one time step are grouped together. This yields a block diagonal matrix with some off-diagonal elements: every block represents one time step, and the off-diagonal elements arise from the time-dependent constraints. Extracting the time steps belonging to the subdomain T_l results in a permuted version of the submatrix A_l, whose structure is exactly the same as that of the original KKT matrix A. If the off-diagonal elements are taken into account as "boundary conditions", it is possible to interpret the submatrix A_l as representing an optimisation problem in its own right, i.e. the original DOPF problem reduced to the subdomain T_l. Figure 2 illustrates this. As a consequence, the submatrices A_l tend to be ill-conditioned.

Relationship Between the ASM and the Constraint Preconditioner
To see the relation between the additive Schwarz method and the constraint preconditioner introduced in Sect. 3.2, it is necessary to consider a DOPF problem without time-dependent equality constraints; in our case, we had to drop the storage constraints g_S^t to achieve this. Furthermore, it is important to consider a non-overlapping decomposition of the time domain, i.e. s_ol = 0. In this case, the corresponding ASM preconditioner M̃_ASM reduces to a block Jacobi method in which, for each subdomain l, the couplings between the variables belonging to the last time step of T̃_l and the first time step of T̃_{l+1} are omitted. Since only time-dependent inequality constraints are left, the coupling between the variables belonging to different time steps can only arise in the (1, 1)-block of the KKT matrix, namely in the (∇h)^T Σ (∇h) part. This means that multiplying M̃_ASM with b amounts to solving a modified KKT system whose modified (1, 1)-block H̃ retains only the couplings inside the subdomains. Thus, M̃_ASM^{-1} has the form of a constraint preconditioner. At this point, it also becomes clear that the ASM can only be interpreted as a constraint preconditioner if there are no time-dependent equality constraints; if such constraints existed, the (2, 1)- and (1, 2)-blocks of M̃_ASM^{-1} would change too. As pointed out in Sect. 3.2, 1 is an eigenvalue of M̃_ASM A of multiplicity 2 n_λ, and the remaining n_x − n_λ eigenvalues are solutions of a generalized eigenvalue problem involving H and H̃ on the nullspace of the constraint Jacobian [10]. For a low number of subdomains q, one can expect H̃ to be a good approximation to H, leading to a spectrum σ(M̃_ASM A) clustered close to one. However, the more subdomains there are, the more couplings are neglected and the less accurately H̃ approximates H. This results in a more scattered eigenvalue distribution of M̃_ASM A, although a large number of eigenvalues remain equal to 1.

Numerical Experiments
In this section, we present results for our proposed method applied to a DOPF problem. In the first part, we discuss the parallel speedup of our solver; in the second, we investigate how the KKT matrix's spectrum σ(A) changes over the iterations of the interior point method. The test grid used for our experiments represents a possible variant of the German transmission grid in the year 2023 and consists of 1215 nodes, 2247 AC lines, 8 DC lines, 181 conventional power plants, 355 renewable energy sources and 67 storage facilities. For more details, we refer the reader to [12]. For solving (16), we use the PDIPM algorithm mips, which is written in MATLAB and part of MATPOWER [21]. In this algorithm, we replace the standard MATLAB backslash operator \ for solving linear systems by our own linear solver. Our solver applies a right-preconditioned GMRES method with the ASM as preconditioner. For computing the LU-factorizations of the local systems A_l, we use the software package Intel MKL PARDISO. Our solver is written in C++ and makes use of the KSPGMRES and PCASM methods provided by PETSc [2], which is compiled in Release mode with the Intel compiler 17.0. The computation of the eigenvalues of the KKT matrices and the preconditioned KKT matrices has been done with SLEPc [9] and MATLAB. Inter-process communication is realised by means of the Message Passing Interface (MPI). The numerical experiments were carried out on the bwForCluster MLS&WISO Production at Heidelberg University, whose nodes have two Intel Xeon E5-2630v3 processors, i.e. 16 CPU cores at 2.4 GHz per node, and 64 GB working memory.

Parallel Speedup
In our numerical experiments, we concatenated the 48 h load profile, which was part of the described test grid, to obtain a test case comprising 48 days with a temporal resolution of 1 h, i.e. N T = 960 time steps. This causes the linear system (24), which has to be solved in every PDIPM iteration, to have the dimension n x + n λ ≈ 6.168 · 10 6 . Furthermore, we set α = 0.4, which led to ramping constraints limiting the change of power generation of conventional power plants by 40%/h of their respective maximum power generation.
We set f eas = grad = cost = comp = 10 −5 as PDIPM termination criteria and limited the number of PDIPM iterations to 300. The number of GMRES iterations was restricted to 1500. The GMRES method was allowed to restart after 300 iterations. Its relative residual tolerance rtol was determined as described in Sect. 3.3.
The ASM was used in its standard, non-restricted form, and the overlap s_ol was set to 3 for all tests. The parallel speedup on q processors for the first k PDIPM iterations is defined as s_q(k) := T_1(k) / T_q(k). We computed the sequential reference time T_1(k) by solving the linear systems (24) with an LU-factorization and adding up the times needed to solve them up to the k-th PDIPM iteration; MKL PARDISO's sequential LU-factorization algorithm was applied.
Analogously, T q (k) was obtained by adding up the times needed to solve the same linear systems, but this time iteratively, i.e. the GMRES method and the ASM were used. In Fig. 3 we plotted the results of our speedup computations for different numbers of processors. The dashed graph represents the speedup obtained when the KKT systems (24) are solved directly with MKL PARDISO's parallel implementation of the LU -factorization using 16 processors. For this amount of processors we observed the best speedup.
Besides, we scaled the y-axis logarithmically to ease the comparison with the LU-solver and to emphasise the large speedups during the first iterations. For our proposed iterative solver we observed, no matter the number of processors q, a clear speedup in comparison with the LU-solver. The parallelization due to the ASM shows its power during the first PDIPM iterations. Unfortunately, this initial speedup decreases rapidly; after 40 iterations we end up with a total speedup of five. This decrease is accompanied by an increase in GMRES iterations. To demonstrate this, we plotted in Fig. 4 the number of GMRES iterations needed to solve the KKT system at the k-th PDIPM iteration. The first thing to note is that the number of GMRES iterations changes not only with the PDIPM iterations, but also with the number of processors used to solve the KKT systems. It is likely that the increase in GMRES iterations over the PDIPM iterations is caused by a change in the spectrum of the original KKT matrix A. The increase with the number of processors can be explained as follows: the more processors we use, the smaller the subproblems described by A_l become and, hence, the more intertemporal couplings are neglected, which makes M_ASM a worse approximation to A^{-1}. The closer M_ASM is to A^{-1}, the more the spectrum σ(A M_ASM) of the preconditioned operator is clustered around 1.
Before we investigate the spectrum of the matrices in the next part of this section, we would like to summarise our findings: even though we observed a rapid decrease in the speedup over the PDIPM iterations, it was possible to obtain a moderate speedup with only 16 processors, and in comparison to a standard parallel LU-factorization also using 16 processors, our solver was more than a factor of 2 faster.

Eigenvalue Distribution
Since the GMRES method converges faster, i.e. needs fewer iterations, if the eigenvalues of the matrix are clustered around 1, we would like to investigate how much the ASM, applied as a preconditioner, improves the eigenvalue distribution. To this end, we calculated the eigenvalues of largest and smallest magnitude of the KKT matrix A and of the preconditioned KKT matrix A M_ASM, respectively, writing λ_max^A = max{|λ| : λ ∈ σ(A)} and λ_min^A = min{|λ| : λ ∈ σ(A)}. Seeing that computing the eigenvalues for the complete N_T = 960 time steps takes very long, or is not even possible, we decided to calculate them for N_T = 96 time steps and for each PDIPM iteration. We set the overlap to s_ol = 1 and used q = 32 subdomains. The results are presented in Fig. 5. The red lines depict the development of λ_max^A and λ_min^A of the KKT matrix, respectively; thus, they bound all possible eigenvalue magnitudes of A, which is indicated by the red shaded area. Analogously, the blue lines and the blue area represent the spectrum of the preconditioned KKT matrix.
The spectrum of the preconditioned KKT matrix is clustered around 1 in every interior point method iteration, as the magnified part of the graph shows.

Conclusion
In this work we proposed a way of solving linear systems arising from Dynamic Optimal Power Flow (DOPF) problems in parallel by means of an overlapping Schwarz domain decomposition method, namely the additive Schwarz method (ASM). It was shown how to apply this method in the context of DOPF problems and that the obtained submatrices correspond to localised formulations of a DOPF problem with additional boundary conditions. Numerical tests on one large-scale DOPF problem showed that the combination of the Generalized Minimal Residual (GMRES) method and the ASM is able to solve DOPF problems and yields good parallel speedup for some of the PDIPM iterations. In a smaller example, we demonstrated that the ASM is a good preconditioner in the sense that it clusters the eigenvalues around 1.
Unfortunately, the proposed solver had problems with some of the KKT systems arising in the large-scale DOPF problem. Improving its robustness will therefore be one of our main goals in future work.