Introduction

In a traditional electric power system, a vertically integrated utility (VIU) owned the generation, transmission and distribution networks and supplied power to customers at regulated rates. Deregulation transformed this VIU-dominated structure into an open energy market and enhanced economic efficiency, making the market participants who provide energy services more competitive. The deregulated system consists of generation companies (GENCOs), distribution companies (DISCOs) and transmission companies (TRANSCOs). In addition, an Independent System Operator (ISO) has been introduced to ensure secure and economical operation of the power system in coordination with all other companies [1, 2].

In this newly emerged structure, a DISCO has the freedom to purchase power from any GENCO, located within or outside its control area. The ISO is an independent, disassociated agent for the market participants, and all transactions in the open energy market are carried out under its supervision. The ISO controls various ancillary services to provide secure, reliable and economical power transmission [3]. AGC is one of these ancillary services, in which the area control error (ACE) is used as the control input. Different issues and analyses of power system models in the deregulated environment have been addressed by researchers in [1, 2, 4,5,6,7,8,9].

The emergence of the deregulation era in power systems has directed research towards its different aspects. Christie et al. [2] described several possible structures for AGC in an open market and also addressed technical issues in power system operation after deregulation. Two different approaches for AGC, based on an HVDC link and a ramp-following controller, were introduced by Bakken et al. [7] for the interconnected Norwegian and Swedish power system in a competitive environment. A detailed simulation and optimization of AGC systems after liberalization was carried out by Donde et al. [1], where the concept of the DISCO participation matrix was demonstrated for different types of contracts and trajectory sensitivity was explored. Yao et al. [10] developed an AGC logic based on the North American Electric Reliability Council's (NERC) new standards. Under the old NERC standard, the Control Performance Criteria (CPC) were used to measure control performance, but due to a lack of technical justification NERC replaced the CPC with the new Control Performance Standard (CPS) and Disturbance Control Standard (DCS). Simulation results indicated significant reductions in the number of pulses and pulse reversals sent to controlling units, and the approach was found convenient for controlling the degree of compliance while minimizing unit control.

Bhatt et al. [8] proposed a model for AGC in the restructured power system with the concept of the DISCO participation matrix to simulate bilateral contracts in three and four areas. Hybrid particle swarm optimization was used to obtain gain parameters for optimal transient performance. Roy et al. [9] also studied four-area multi-unit AGC in a restructured power system, where chaotic ant swarm optimization and a real-coded GA were used to obtain gain parameters for optimal transient performance. Mosaad et al. [11] presented an adaptive PID-based LFC using Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN) techniques.

The literature on the deregulated environment mostly considers thermal and hydro units with linearized models and classical control techniques. Most earlier works on AGC pertain to interconnected thermal-thermal systems, mostly with non-reheat turbines. Relatively less attention has been devoted to AGC with interconnected non-conventional sources, but in recent years several control techniques based on optimal, intelligent and robust approaches have been proposed for AGC in deregulated power systems.

Optimization in Frequency Regulation Problem

Ghoshal et al. [12] determined GA-based optimal gains for an interconnected power system to obtain better dynamic performance; in addition, a Sugeno fuzzy logic technique for on-line adaptive integral gain scheduling was explored for AGC. Ghoshal [13] applied GA as well as a hybrid genetic algorithm-simulated annealing technique to obtain optimal gains for integral and PID controllers in interconnected power systems. Ghoshal [14] also used various novel heuristic stochastic-search techniques for the optimization of PID controller gains. Classical particle swarm optimization, hybrid particle swarm optimization and hybrid genetic algorithm-simulated annealing were also explored to obtain optimal PID controller gains [15]. Al-Hamouz et al. [16] used GA for the selection of variable structure controller feedback gains for AGC of a single-area power system.

PSO is a population-based stochastic optimization technique inspired by the social behavior of bird flocking. Sharifi et al. [17] designed a multi-objective PID controller for AGC of a two-area interconnected power system based on adaptive weighted particle swarm optimization; in this approach, peak overshoot/undershoot and settling time were used as the objective functions. Similarly, Bhatt et al. [18] obtained optimal PID gains for AGC using hybrid particle swarm optimization for interconnected power systems.

Other researchers have used further evolutionary optimization techniques for frequency regulation. Naidu et al. [19] proposed a multi-objective artificial bee colony (ABC) algorithm for AGC of a two-area interconnected reheat thermal power system, where the controller gains were selected to provide a compromise between the settling time and the maximum overshoot of the frequency response. Gozde et al. [20] examined the ABC algorithm for AGC, applying it to an interconnected reheat thermal power system to tune the parameters of PI and PID controllers. Ali et al. [21] employed the bacterial foraging optimization algorithm (BFOA) to optimize the parameters of a nonlinear AGC with a PID controller; the effectiveness of the proposed BFOA was validated over different operating conditions and system parameter variations. Panda et al. [22] proposed a hybrid bacterial foraging optimization and particle swarm optimization algorithm for AGC of an interconnected power system with a PI controller. Ibraheem et al. [23] proposed a genetic algorithm-simulated annealing (GASA) technique for an interconnected power system, in which the gains of a fuzzy logic based AGC were tuned by GASA. Shankar et al. [24] and Sinha et al. [25] proposed the fruit fly algorithm for interconnected power systems. Saha et al. [26] proposed stochastic fractal search optimized classical controllers for AGC. Apart from these, several other evolutionary optimization techniques have been used for AGC [27,28,29,30].

Several researchers have thus explored different evolutionary optimization techniques for AGC, among which PSO is a recent technique with the potential to address such complex problems as AGC of interconnected power systems. Researchers have proposed many PSO variants; in this manuscript, six potential variants are identified and simulated in order to determine which PSO variant is best suited to PID controller optimization and can provide optimal results under various dynamic situations.

Development of Supplementary Controller for Interconnected Power System

The increasing complexity and size of electric power systems, together with growing power demand, have made their operation more challenging [31,32,33]. In each area, a supplementary controller monitors the system frequency and tie-line power flows and, based on their changes, resets the output of the generators within the area. As the ACE is driven to zero, both the frequency and tie-line power errors are forced to zero. This supplementary controller is viewed as a supervisory control function that matches generation to the load demand.

Generally, the supplementary controller comprises a conventional controller such as an integral, PI or PID controller. Even when these controllers are employed in an interconnected power system, the system performance does not improve significantly, because fixed-gain controllers are not suitable for frequently changing loads; the situation becomes worse as the number of interconnections increases and non-linearities are present. Different approaches used for supplementary control are as follows.

Proportional Integral Derivative Controller

Over the last few decades, the PID controller has been utilized extensively in industrial processes because of its robustness and simplicity. The transfer function of the standard PID controller (also known as the "three-term" controller) [34, 35] is given by

$$U\left(s\right)={K}_{p}+\frac{{K}_{i}}{s}+{K}_{d}s$$
(1)

or

$$U\left(s\right)={K}_{p}(1+\frac{1}{{T}_{i}s}+{T}_{d}s)$$
(2)

where Kp is the proportional gain, Ki is the integral gain, Kd is the derivative gain, \({T}_{i}\) is the integral time constant and \({T}_{d}\) is the derivative time constant.

Controller parameters are tuned such that the closed-loop control system is stable and meets user-defined objectives. Different types of tuning methods exist for the PID controller, such as analytical methods, heuristic methods, objective-based optimization methods and frequency-based methods. In this work, two methods are used for obtaining the PID parameters: the first is the classical approach, where the Ziegler-Nichols tuning method is used, and the second is optimization based on different objective functions.
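
As an illustration of the three-term law of Eq. (1), a minimal discrete-time sketch is given below; the gain values, sampling time and error sample are hypothetical placeholders, not the gains tuned later in this work.

```python
# Minimal discrete-time realization of the three-term law of Eq. (1):
# u = Kp*e + Ki*integral(e) + Kd*de/dt. All numeric values are placeholders.

class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return the control signal for the current error sample (e.g. the ACE)."""
        self.integral += error * self.dt                   # integral term
        derivative = (error - self.prev_error) / self.dt   # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: hypothetical gains applied to a single error sample
pid = PIDController(kp=1.0, ki=0.5, kd=0.2, dt=0.01)
u = pid.update(error=0.02)
```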

Particle Swarm Optimization

The PSO algorithm, first proposed by Kennedy and Eberhart in 1995 [36], is a stochastic, heuristic, population-based optimization method built on swarm intelligence. It originated from research on the movement behavior of bird and fish flocks. The algorithm is widely used in many applications because it is easy to implement and only a few parameters need to be tuned. It converges to a global solution faster than other stochastic optimization methods such as GA and simulated annealing.

The basic idea of PSO is that when birds move from one place to another in search of food, there is always one bird that is very close to the food or has information about a good food source. The other birds eventually flock to that place, and their movement is guided by their own best known positions as well as the best known position of the flock. In the PSO algorithm, each bird's position is likewise compared to its own best known position and to the best known position of the swarm. The birds' successive moves drive the development of the solution, and a good position corresponds to the most optimized solution [37].

The PSO algorithm consists of 'n' particles that represent potential solutions. Each particle is represented by its current position 'x' and current velocity 'v' in an 'm'-dimensional space. At each iteration, the position is updated based on the following components:

  1. Inertia.

  2. Best known individual position.

  3. Best known swarm position.

$${v}_{i}^{m}\left(iter+1\right)=w \ast {v}_{i}^{m}\left(iter\right)+{c}_{1} \ast {R}_{1}*\left({pbest}_{i}^{m}\left(iter\right)-{x}_{i}^{m}\left(iter\right)\right)+{c}_{2} \ast {R}_{2}*\left({gbest}^{m}\left(iter\right)-{x}_{i}^{m}\left(iter\right)\right)$$
(3)
$${x}_{i}^{m}\left(iter+1\right)={x}_{i}^{m}\left(iter\right)+{v}_{i}^{m}\left(iter+1\right)$$
(4)

Where,

  • \(iter\) Iteration number

  • \(i\) Particle index

  • \(m\) Dimension

  • \({v}_{i}^{m}\) Velocity of \({i}^{th}\) particle in \({m}^{th}\) dimension

  • \({x}_{i}^{m}\) \({i}^{th}\) particle position in \({m}^{th}\) dimension

  • \({gbest}^{m}\) Swarm global best position in \({m}^{th}\) dimension

  • \({pbest}_{i}^{m}\) Best position of \({i}^{th}\) particle in \({m}^{th}\) dimension

  • \(w\) Inertia weight

  • \({c}_{1}\)\({c}_{2}\) Acceleration constants

  • \({R}_{1}\)\({R}_{2}\) Random numbers with uniform distribution [0, 1]
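
A minimal sketch of the basic PSO loop implementing the update rules of Eqs. (3) and (4) is given below; the fitness function and parameter values are generic placeholders, not the AGC objective or the settings used later in this work.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lb=-1.0, ub=1.0):
    """Basic PSO using the velocity/position updates of Eqs. (3)-(4)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_particles, dim))        # particle positions
    v = np.zeros((n_particles, dim))                   # particle velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (3)
        x = np.clip(x + v, lb, ub)                                  # Eq. (4)
        val = np.array([fitness(p) for p in x])
        improved = val < pbest_val                    # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()      # update global best
    return gbest, pbest_val.min()

# Usage with a simple test function (sphere); for AGC, the fitness would be a
# time-domain performance index computed from the simulated two-area system.
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```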

Flowchart of PSO optimization is presented in Fig. 1.

Fig. 1
figure 1

PSO flowchart of optimisation process

Different Variants for PSO

After the initial development of PSO by Kennedy and Eberhart, several variants of the algorithm have been proposed [38, 39]. Various inertia weighting strategies as well as acceleration-coefficient weighting strategies have been developed. These strategies monitor the state of the particles in the search space and adjust the weights based on a feedback parameter.

For all the algorithms, the upper and lower bounds of the PID controller gains have been set to 1.5 times and 0.1 times the Ziegler-Nichols gain values, respectively. This step is essential because out-of-bound PID gains could lead to instability of the system. In this work, the swarm size has been taken as 20 and the number of iterations as 200. The acceleration constants C1 and C2 were taken as 1.2 and 0.12, respectively, and the positions and velocities were initialized randomly.
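
A short sketch of how the search bounds and PSO settings described above could be set up; the Ziegler-Nichols gain values shown are hypothetical placeholders, and only the 0.1x/1.5x bounding rule and the swarm settings follow the text.

```python
import numpy as np

# Hypothetical ZN-tuned gains [Kp, Ki, Kd] for the area-1 and area-2 PID
# controllers; the actual values are those obtained from the ZN tuning step.
zn_gains = np.array([1.2, 0.8, 0.3, 1.1, 0.7, 0.25])

lower_bound = 0.1 * zn_gains    # 0.1 times the ZN gain values
upper_bound = 1.5 * zn_gains    # 1.5 times the ZN gain values

# PSO settings stated in the text
swarm_size, iterations = 20, 200
c1, c2 = 1.2, 0.12

# Random initialization of positions (within bounds) and velocities
rng = np.random.default_rng()
positions = rng.uniform(lower_bound, upper_bound, (swarm_size, zn_gains.size))
velocities = rng.uniform(-(upper_bound - lower_bound),
                         upper_bound - lower_bound,
                         (swarm_size, zn_gains.size))
```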

Algorithm 1 – Success Rate

Nickabadi, Ebadzadeh and Safabakhsh [38] first delineated two situations occurring during the course of a PSO run. Both cases initially consider two particles but can be extrapolated to many particles. In the first situation, both particles are far from the optimum but close to each other; this necessitates a larger velocity to ensure fast convergence towards the optimum. In the second situation, the two particles are near the optimum but far from each other; a large inertia weight in the latter case would lead to oscillations about the optimum without convergence, so a smaller particle velocity is required. With these two cases, they described 'near' and 'far' from the optimum differently from the conventional usage: a particle close to the optimum but with a high velocity will take time to reach the optimum point and is hence considered far [38]. Rather than using the particles' fitness values as a direct feedback factor for determining the inertia weight, they proposed a percentage-success method. A high success rate implies that the particles have converged to one region and are moving towards the global optimum together; this falls under the first scenario and requires a larger velocity and hence a larger inertia weight. A low success rate means that the particles are moving about the optimum without converging on it, and a smaller velocity would help achieve faster convergence. As a result, the inertia weight is varied as a direct function of the success percentage, computed from the per-particle success indicator '\(S\left(i,t\right)\)'.

$$S\left(i,t\right)=\begin{cases}1 & \text{if } F\left(pbest_{i,\left(t\right)}\right) < F\left(pbest_{i,\left(t-1\right)}\right)\\ 0 & \text{otherwise}\end{cases}$$
(5)
$$Ps= \frac{\sum _{i=1}^{n}S(i,t)}{n}$$
(6)

'Ps' is the percentage of particles whose fitness value improved compared with the previous iteration. The inertia weight is then updated as a linear function of 'Ps'.

$$w=\left(w_{\max}-w_{\min}\right)Ps+w_{\min}$$
(7)

In this work, \(w_{\max}\) and \(w_{\min}\) have been taken as 0.9 and 0.4, respectively.
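
A minimal sketch of the success-rate-based inertia weight of Eqs. (5)-(7); the example fitness values are hypothetical and assume a minimization problem.

```python
import numpy as np

def success_rate_inertia(pbest_val, prev_pbest_val, w_max=0.9, w_min=0.4):
    """Adaptive inertia weight of Eqs. (5)-(7)."""
    success = pbest_val < prev_pbest_val      # S(i,t), Eq. (5): personal best improved
    ps = np.mean(success)                     # Ps, Eq. (6): fraction of successful particles
    return (w_max - w_min) * ps + w_min       # w,  Eq. (7)

# Example: 3 of 5 particles improved their personal best, so Ps = 0.6
w = success_rate_inertia(np.array([1.0, 2.0, 0.5, 3.0, 1.2]),
                         np.array([1.5, 1.8, 0.7, 3.5, 1.1]))
```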

Algorithm 2 – Evolution Speed and Aggregation Degree Factors

Yang et al. [40] proposed an adaptive weight strategy using an evolution speed factor and an aggregation degree factor. Bergh et al. [41] discussed two definitions of convergence: the first is that all particles assume the same value as time tends to infinity, and the second is that the global best values of successive iterations tend towards a final overall global best value at infinite time. They further noted that the classical PSO follows the first definition and, to satisfy it, the velocity of the particles becomes zero, which can leave the optimization process stuck in local optima. The motivation behind the new adaptive method is to ensure that, when the optimum solution is close by, the particles slow down and perform an intensive search, whereas if the particles are not near the optimum solution they need to spread out and cover larger areas of the search space. The following evolutionary speed and aggregation degree factors were proposed to capture this.

Evolutionary speed factor:

$$h_{it}= \left|\frac{\min\left(F\left(pbest_{i,\left(t-1\right)}\right), F\left(pbest_{i,\left(t\right)}\right)\right)}{\max\left(F\left(pbest_{i,\left(t-1\right)}\right), F\left(pbest_{i,\left(t\right)}\right)\right)}\right|$$
(8)

where \(pbest_{i,\left(t\right)}\) is the local best fitness of the ith particle up to the tth iteration. For every dimension, the value of '\({h}_{it}\)' remains the same because the fitness values are common across all dimensions. This factor takes into account the previous history of each particle. A smaller value of '\({h}_{it}\)' implies that the particle's personal best fitness has changed considerably compared with the previous iteration, i.e. the evolution speed is high in this case. For the purposes of this paper, the fitness values are of the same sign, so the modulus sign can be discarded.

The aggregation degree factor:

$$s_{\left(t\right)}= \left|\frac{\min\left(Fbest_{\left(t\right)}, Favg_{\left(t\right)}\right)}{\max\left(Fbest_{\left(t\right)}, Favg_{\left(t\right)}\right)}\right|$$
(9)

where \(Favg_{\left(t\right)}\) is the average of all the fitness values in the tth iteration and \(Fbest_{\left(t\right)}\) is the best (minimum) fitness value among all the fitness values in that iteration. Note that this value is different from the global best fitness value, because the latter applies from the first to the tth iteration whereas the former applies only to the tth iteration. If this value approaches one, the average fitness value is nearing the minimum of that particular iteration and there is a risk of particles being stuck in local optima. As a result, a higher value of \({s}_{\left(t\right)}\) necessitates a larger velocity to prevent local trapping. If this value is smaller, it is necessary to slow down the particles and perform a more intensive search in a smaller area.

With these two factors, Yang et al. [40] proposed modifying the inertia weight as a function of the evolutionary speed factor '\({h}_{it}\)' and the aggregation degree factor '\({s}_{\left(t\right)}\)':

$$w_{i,\left(t\right)}=w_{ini}- \alpha \left(1-h_{i,\left(t\right)}\right)+ \beta s_{\left(t\right)}$$
(10)

where \(\alpha\) and \(\beta\) are two constants in the range [0, 1]; in this work a fixed value of 0.2 has been taken for both.
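
A minimal sketch of the adaptive inertia weight of Eqs. (8)-(10), built from the evolution speed factor and the aggregation degree factor; the example values are hypothetical and assume a minimization problem with positive fitness values.

```python
import numpy as np

def esf_adf_inertia(pbest_prev, pbest_curr, fitness_curr,
                    w_ini=0.9, alpha=0.2, beta=0.2):
    """Per-particle inertia weight of Eq. (10)."""
    # Evolution speed factor h for each particle, Eq. (8)
    h = np.abs(np.minimum(pbest_prev, pbest_curr) /
               np.maximum(pbest_prev, pbest_curr))
    # Aggregation degree factor s for the whole swarm, Eq. (9)
    f_best, f_avg = fitness_curr.min(), fitness_curr.mean()
    s = abs(min(f_best, f_avg) / max(f_best, f_avg))
    # Eq. (10): w_i = w_ini - alpha*(1 - h_i) + beta*s
    return w_ini - alpha * (1.0 - h) + beta * s

# Example with a 4-particle swarm
w_i = esf_adf_inertia(pbest_prev=np.array([2.0, 1.5, 3.0, 0.9]),
                      pbest_curr=np.array([1.8, 1.5, 2.4, 0.7]),
                      fitness_curr=np.array([1.9, 1.6, 2.5, 0.8]))
```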

Algorithm 3 – Global-local Best Inertia Weight

Kennedy and Eberhart discussed five principles of swarm intelligence: the proximity principle, the quality principle, the principle of diverse response, the principle of stability and the principle of adaptability [42]. Arumugam and Rao [43] compared the constant inertia weight strategy and the linearly decreasing inertia weight strategy with their proposed method. They then proposed an algorithm that takes the global and local best fitness values into account, while also satisfying the above principles. The modification is

$$w=1.1 -\frac{F\left(gbest\right)}{F\left(avg\left(pbes{t}_{i}\right)\right)}$$
(11)
$$F\left(avg\left(pbes{t}_{i }\right)\right)= \frac{\sum _{i=1}^{n}F\left(pbes{t}_{i}\right)}{n}$$
(12)

This inertia weight is used for all the particles in a single iteration and is updated with the global best and local best fitness values. A greater difference between the global and local fitness values implies that the particles are not close to the global best, resulting in a larger inertia weight; this facilitates searching a larger portion of the search space due to the increase in velocity. When the average of the particles' personal best fitness values is near the global best fitness value, the search becomes more concentrated and the inertia weight is lowered. Subtracting the ratio from 1.1 ensures that the inertia weight never becomes zero, even when the global best fitness value equals the average value. This inertia weight is then incorporated into the velocity equation. However, a possible problem arises with this method: it has no mechanism for jumping out of local optima if the particles stagnate. This is evident from the formula, since once the particles achieve fitness values similar to the global best, the inertia weight ceases to change significantly. This can lead to premature convergence if the global best found is not actually the minimum value of the objective function.
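
A minimal sketch of the global-local best inertia weight of Eqs. (11)-(12); the fitness values in the example are hypothetical and assume minimization.

```python
import numpy as np

def glbest_inertia(gbest_val, pbest_vals):
    """Inertia weight of Eq. (11): w = 1.1 - F(gbest) / mean(F(pbest_i))."""
    avg_pbest = np.mean(pbest_vals)          # Eq. (12)
    return 1.1 - gbest_val / avg_pbest       # Eq. (11)

# Example: global best fitness 0.8, personal bests averaging 1.0 -> w = 0.3
w = glbest_inertia(0.8, np.array([0.9, 1.0, 1.1]))
```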

Algorithm 4 – Distance from Global Best

An adaptive inertia weight strategy, along with a modification of the position-updating equation, was proposed by Suresh et al. in [44]. For particles not to lose their explorative ability, it is important that they do not get stuck in local optima. If the velocity of a particle is small and the fitness values of the global best, the local best and the particle's current position do not differ significantly, then the updated velocity changes only slightly compared with the previous one, and the particle can become stuck in a local optimum. This can especially happen to the global best particle, for which the social and cognitive terms of the velocity update equation reduce to zero, so the inertia weight and velocity are damped quickly without a minimum fitness value yet being attained. To prevent this, the strategy proposed in that work is

$$w={w}_{0}( 1-\frac{distanc{e}_{i}}{maxdistance} )$$
(13)
$$distance_{i}= \sqrt{\sum_{d=1}^{D}\left(gbest_{d}-x_{i,d}\right)^{2}}$$
(14)
$$maxdistance=\text{m}\text{a}\text{x}\left(distanc{e}_{i}\right)$$
(15)

w0 is a random value between 0.5 and 1. This ensures that the attraction to the global best dominates when the particles move farther away from it.

The modified position update equation is given by:

$${x}_{i}=\left( 1-\rho \right){x}_{i (t-1)}+{v}_{i}$$
(16)

\(\rho\) is a random value in the range [-0.25, 0.25]. This method prevents loss of diversity by preserving the particles' explorative ability.
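
A minimal sketch of the distance-based inertia weight of Eqs. (13)-(15) and the modified position update of Eq. (16); the particle data in the example are hypothetical, and the velocity update itself (which uses this w in the standard Eq. (3)) is not repeated here.

```python
import numpy as np

def distance_based_update(x, v, gbest, rng):
    """Inertia weights from Eqs. (13)-(15) and positions from Eq. (16)."""
    # Eqs. (14)-(15): Euclidean distance of each particle to gbest, and its maximum
    distance = np.linalg.norm(gbest - x, axis=1)
    max_distance = distance.max()
    # Eq. (13): w = w0 * (1 - distance/maxdistance), with w0 random in [0.5, 1]
    w0 = rng.uniform(0.5, 1.0)
    w = w0 * (1.0 - distance / max_distance)
    # Eq. (16): x_i = (1 - rho) * x_i(t-1) + v_i, with rho random in [-0.25, 0.25]
    rho = rng.uniform(-0.25, 0.25, size=x.shape)
    x_new = (1.0 - rho) * x + v
    return w, x_new

# Example with 3 particles in 2 dimensions
rng = np.random.default_rng(1)
w, x_new = distance_based_update(x=rng.random((3, 2)), v=np.zeros((3, 2)),
                                 gbest=np.array([0.5, 0.5]), rng=rng)
```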

Algorithm 5 – Fixed Inertia Weight

In this standard method, a fixed inertia weight parameter is introduced into the original PSO method. This algorithm was proposed by Shi and Eberhart in [45].

$$v_{ij\left(t+1\right)}=w\, v_{ij,t}+c_{1} r_{1ij} \left(pbest_{ij}-x_{ij,t}\right)+c_{2} r_{2ij} \left(gbest_{j}-x_{ij,t}\right)$$
(17)
$$x_{ij\left(t+1\right)}=x_{ij,t}+v_{ij\left(t+1\right)}$$
(18)

Here, the inertia weight is kept at a constant value of 0.9.

Algorithm 6 – Varying Acceleration Coefficients

So far, we have seen inertia weights being adapted according to different conditions. Ratnaweera et al. [46] instead emphasized the variation of the acceleration coefficients to ensure good explorative and exploitative capabilities. This method has also been used in other applications, such as congestion management in power systems [39]. As mentioned earlier, c1 is the cognitive factor and represents a particle's attraction to its own best solution, while c2 is the social factor and represents a particle's attraction to the global best of the entire swarm. As Kennedy and Eberhart discussed in [42], a high cognitive learning factor leads to individual particles moving about randomly in the search space, while a high social learning factor can lead to premature convergence to a local optimum. Particles need to explore extensively first and then exploit intensively later. A large cognitive coefficient and a small social coefficient initially allow particles to fly about the search space; lowering the former and increasing the latter over the iterations then allows convergence to the optimum solution. For this purpose, c1 and c2 are varied linearly.

$$c_{1}=\left(c_{1f}-c_{1i}\right)\frac{iter}{iter_{total}}+c_{1i}$$
(19)
$$c_{2}=\left(c_{2f}-c_{2i}\right)\frac{iter}{iter_{total}}+c_{2i}$$
(20)

Since c1 is linearly decreased and c2 is linearly increased, taking c1i = 2.5, c2i = 0.5, c1f = 0.5 and c2f = 2.5 accomplishes this, together with a constant inertia weight of 0.9.
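
A minimal sketch of the linearly varying acceleration coefficients of Eqs. (19)-(20) with the coefficient values stated above.

```python
def tvac_coefficients(iter_no, total_iters,
                      c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time-varying acceleration coefficients of Eqs. (19)-(20):
    c1 decreases from c1i to c1f, c2 increases from c2i to c2f."""
    c1 = (c1f - c1i) * iter_no / total_iters + c1i   # Eq. (19)
    c2 = (c2f - c2i) * iter_no / total_iters + c2i   # Eq. (20)
    return c1, c2

# Example: halfway through 200 iterations both coefficients equal 1.5
c1, c2 = tvac_coefficients(iter_no=100, total_iters=200)
```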

Results and Discussion

In the previous section, different controllers and tuning methods were discussed, with emphasis on different variants of PSO optimization. Initially, different sets of simulations are carried out to determine which of the PI and PID controllers provides the better dynamic response. To evaluate the effectiveness of these controllers, different step load change scenarios are also considered for the two-area interconnected power system.

Figure 2 shows the frequency deviations of area-1 and area-2 and the tie-line power deviation for the different controllers under a step load change (0.01 p.u. in area-1 and 0.02 p.u. in area-2). It shows that the PSO-optimized PID controller gives overall better results than the PSO-optimized PI controller and the ZN-tuned PID controller. The ZN-tuned PI controller does not give stable results, so that case is not considered in the further analysis. The performance indices used to analyze the dynamic response are shown in Table 1, and the gain values of the different controllers obtained from ZN tuning or PSO optimization are shown in Table 2. Here, the success-rate-based PSO optimization algorithm (hereafter referred to as Algo-1) was used to optimize the PI and PID controllers.

Fig. 2 Frequency deviation for area-1, area-2 and tie-line power deviation for different controllers

Table 1 Performance indices for different controllers
Table 2 Gain values for different PI and PID controllers

Further, in order to evaluate the effectiveness of the PSO-optimized PID controller, it is compared against the ZN-tuned PID controller and the PSO-optimized PI controller. Three different step load scenarios are considered, as follows:

  1. 0.01 p.u. step load for Area-1 and Area-2

  2. 0.02 p.u. step load for Area-1 and Area-2

  3. 0.03 p.u. step load for Area-1 and Area-2

The responses of the different controllers for the above load scenarios are shown in Fig. 3, and the corresponding performance indices are quantified in Table 3.

Fig. 3 Frequency deviation for area-1, area-2 and tie-line power deviation for different controllers at different step loads

Table 3 Performance indices for different controllers at different step loads

Further, having chosen PSO, its six different variants are examined and their performance analyzed. In this examination of the PSO variants, the PID controller is selected as the AGC controller for the two-area interconnected power system and the PID controller gains for both areas are optimized.

In order to examine these different PSO variants, five diverse cases have been considered, in which five system parameters are varied, namely the frequency bias coefficients (β1 and β2) of each control area, the generator time constants (Tp1 and Tp2) of each control area and the synchronization coefficient (T12). These cases are:

  a. Standard Case (system parameters equal to their standard values)

  b. N15 Case (system parameters 15% less)

  c. N30 Case (system parameters 30% less)

  d. P15 Case (system parameters 15% more)

  e. P30 Case (system parameters 30% more)

In order to optimize the PID controller gains, each algorithm has been run for 200 iterations, the swarm size is taken as 5 times the number of variables, and the other parameter values are given in the appendix. Since PSO is a stochastic optimization technique, it may not return the same global optimum solution in every run. To examine this, for the standard case each of the PSO variants is run 5 times for up to 200 iterations, and of these 5 runs only the best run is considered for comparison with the other optimization algorithms. The optimization curves of the best fitness versus iteration for the standard case are shown in Fig. 4; it is found that algorithm 1 (Algo 1) converges faster than the other algorithms and that the best fitness value is also obtained with Algo 1.
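
A minimal sketch of the multi-run selection procedure described above; run_pso_variant is a hypothetical wrapper that performs one complete optimization of the PID gains and returns the best gains and best fitness of that run.

```python
# run_pso_variant(variant, iters) is a hypothetical wrapper assumed to return
# (best_gains, best_fitness) for one complete optimization run of the given variant.
def best_of_runs(run_pso_variant, variant, n_runs=5, iters=200):
    results = [run_pso_variant(variant, iters) for _ in range(n_runs)]
    best_gains, best_fitness = min(results, key=lambda r: r[1])   # keep the best run
    mean_fitness = sum(r[1] for r in results) / n_runs            # e.g. for Table 5
    return best_gains, best_fitness, mean_fitness
```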

Fig. 4 Best fitness value with respect to iteration for different algorithms for the standard case

For the other four cases too, the different PSO variants have been run 5 times for 200 iterations. The corresponding optimization curves of the best fitness versus iteration are shown in Fig. 4 for the N15, N30, P15 and P30 cases, respectively. In all these cases, it is found that algorithm 1 (Algo 1) converges faster than the other algorithms, and the best fitness value among all the algorithms is also obtained with Algo 1. The gain values of the PID controllers for area-1 and area-2 obtained from the different PSO variants are shown in Table 4.

Table 4 Gain values of optimized PID controllers obtained from different algorithms for different cases

In order to assess the effectiveness of these different PSO optimization techniques, various performance indices are examined for the frequency deviation of both areas and for the tie-line power deviation under a sudden load perturbation. The frequency deviations of area-1 and area-2 and the tie-line power deviation for the standard case are shown for the different algorithms in Figs. 5, 6 and 7, respectively. The performance indices for the frequency deviation in area-1, the frequency deviation in area-2 and the tie-line power deviation under sudden load perturbation for all stated cases are shown in Figs. 8, 9 and 10. These figures show that Algo 1 is the best among the different PSO variants. Table 5 gives the best fitness value, the mean fitness value and the deviation between the mean and best fitness values for each algorithm, while Tables 6, 7 and 8 give the numerical values that quantify the results. Based on these results, it is concluded that Algo 1 is better than the other five variants in terms of convergence speed and best fitness value.

Fig. 5 Frequency deviation for area-1 for different algorithms for the standard case

Fig. 6 Frequency deviation for area-2 for different algorithms for the standard case

Fig. 7 Tie-line power deviation for different algorithms for the standard case

Fig. 8 Performance indices for deviation in frequency of area-1 for different algorithms for different cases of variation in system parameters

Fig. 9 Performance indices for deviation in frequency of area-2 for different algorithms for different cases of variation in system parameters

Fig. 10 Performance indices for deviation in tie-line power for different algorithms for different cases of variation in system parameters

Table 5 Best fitness, mean fitness and % deviation of best and mean fitness for different algorithms for different cases of variation in system parameters
Table 6 Performance indices for deviation in frequency of area-1 for different algorithms for different cases of variation in system parameters
Table 7 Performance indices for deviation in frequency of area-2 for different algorithms for different cases of variation in system parameters
Table 8 Performance indices for deviation in tie-line power for different algorithms for different cases of variation in system parameters

Conclusions

In this paper, various variants of PSO have been examined for AGC of interconnected thermal-hydro power systems in a deregulated environment. The controller performance is assessed on the basis of dynamic parameters, i.e. settling time and peak overshoot. The simulation results show that the PID controller optimized by PSO algorithm 1 (success rate) provides better performance, in terms of settling time and peak undershoot, than the PID controllers optimized by the other PSO variants. The effectiveness of the success-rate-based PSO variant is also examined over 10 runs, in which it gives the best result, and its mean fitness and deviation from the best fitness are also the lowest among the six PSO variants. This confirms that the success-rate-based PSO variant converges faster to the optimum point.