A quantum-based sine cosine algorithm for solving general systems of nonlinear equations

Abstract

In this paper, a quantum-based sine cosine algorithm, named Q-SCA, is proposed for solving general systems of nonlinear equations. The Q-SCA hybridizes the sine cosine algorithm (SCA) with quantum local search (QLS) to enhance the diversity of solutions and prevent entrapment in local optima. The essence of the proposed Q-SCA is to speed up the search for the optimum and to accelerate the convergence characteristic. The proposed Q-SCA works in two folds: firstly, an improved version of SCA is presented based on tuning the search space around the destination solution dynamically, so that the search space shrinks gradually as the optima are approached; in addition, a new mechanism to update the solutions is introduced using bidirectional equations. Secondly, QLS is incorporated to improve the quality of the solutions obtained by the SCA phase. By this methodology, the proposed Q-SCA can achieve high levels of exploration/exploitation and precise, stable convergence to high-quality solutions. The performance of the proposed algorithm is assessed on twelve systems of nonlinear equations and two electrical applications. Furthermore, the proposed Q-SCA algorithm is applied to expensive large-scale problems, including the CEC 2017 benchmark and realistic optimal power dispatch (OPD), to confirm its scalability. Experimental results affirm that the Q-SCA performs steadily and has a promising overall performance among several compared algorithms.

Introduction

Systems of nonlinear equations (SNLEs) are an important class of problems in the area of optimization, because many real-world applications arising in engineering, economics, chemistry, and physics can be modeled as systems of nonlinear equations (Hoffman 2001; Kelley 2003; Yuan and Lu 2008). Thus, finding the solutions of SNLEs has been of great concern from the application point of view. However, SNLEs pose some difficulties for complex systems, such as difficulty in calculating the derivatives, sensitivity to the initial conditions, and the requirement of a large memory.

Traditional techniques such as the Newton method, the secant method and Muller's method (Goyel 2007) are not capable of efficiently handling SNLEs with a complex nature, because they are extremely sensitive to the choice of the initial guess of the solution and may exhibit oscillatory behavior or diverge, especially when a good initial value near the candidate root has not been picked. Furthermore, they are unsuitable for large-scale systems, where they require a large amount of memory. In addition, most of these techniques are derivative-based, and the differentiability condition may not be fulfilled for some functions.

To dispose of the awkward aspects related to the traditional techniques, many approaches based on metaheuristic algorithms have been developed (Rizk 2018a, b, c, d; Rizk-Allah 2018; Rizk-Allah et al. 2019). Metaheuristic algorithms, such as the genetic algorithm (GA) (Holland 1975), differential evolution (DE) (Das and Suganthan 2011), particle swarm optimization (PSO) (Kennedy and Eberhart 1995; Wang et al. 2014), ant colony optimization (ACO) (Dorigo et al. 1996; Rizk-Allah 2014), the firefly algorithm (FA) (Yang 2010; Rizk-Allah 2016a), and the fruit fly optimization algorithm (FOA) (Pan 2012; Rizk-Allah 2016b), among others (El-Sawy et al. 2013; Rizk-Allah et al. 2013; Rizk 2017), have offered promising efficiency for solving complex optimization problems.

Recently, metaheuristic algorithms have attracted much attention for solving systems of nonlinear equations (SNLEs) (Wu et al. 2011; Jaberipour et al. 2011; Wu and Kang 2003; Dai et al. 2008; Ouyang et al. 2009; Luo et al. 2008; Mo et al. 2009; Wolpert and Macready 1997). Wu et al. (2011) hybridized the social emotional optimization algorithm with the Metropolis principle, while Jaberipour et al. (2011) introduced a novel approach based on PSO to deal with SNLEs. Wu and Kang (2003) presented a parallel elite-subspace evolutionary algorithm to solve SNLEs. Dai et al. (2008) proposed a combined algorithm of the genetic algorithm and the quasi-Newton method for solving SNLEs. Ouyang et al. (2009) hybridized PSO and the Nelder–Mead simplex method to solve SNLEs, while Luo et al. (2008) embedded chaotic optimization to improve the performance of the gradient method. Mo et al. (2009) proposed a conjugate direction particle swarm optimization with the aim of reducing the high dimensionality of the problem. However, the no free lunch (NFL) theorem (Wolpert and Macready 1997) proves that no single optimization algorithm is superior for all problems, so it is necessary to introduce more efficient algorithms to obtain higher-quality solutions for SNLEs.

The sine cosine algorithm (SCA) is one of the recently developed metaheuristic algorithms (Seyedali Mirjalili 2016). It was presented by Mirjalili (Seyedali Mirjalili 2016) for solving optimization problems. SCA is a mathematically modeled technique that simulates the behaviors of the sine and cosine functions to obtain the best solution (destination). In SCA, a set of solutions is created randomly; these solutions are then updated based on the sine and cosine functions by fluctuating them either outwards or towards the destination to create new solutions. However, as a recent optimization method, SCA has some disadvantages that deteriorate its performance. The first is that the guidance strategy through four random parameters may impede the diversity of solutions and lead to entrapment in local optima. The second is that SCA does not have an efficient strategy to improve the quality of solutions during each generation, which may produce poor-quality solutions. Besides, to the best of our knowledge, no endeavors have been reported in the literature to introduce SCA for solving systems of nonlinear equations and their applications in power systems.

It is worth mentioning here that the author has recently developed some works based on SCA (Rizk 2018b, d). However, the methodology developed in this study is completely different in terms of the quantum-search-based inspiration and the real-world applications. For instance, the work presented in Ref. (Rizk 2018b) introduces an SCA-based multi-orthogonal search strategy that uses a discretization mechanism to obtain the orthogonal arrays; the multiple arrays are constructed and then invoked in sequential form to improve the position of the best solution found so far. That algorithm is applied to eighteen unimodal and multimodal benchmark functions and four constrained engineering design problems. In Ref. (Rizk 2018d), an improved variant of SCA was proposed based on two improvements: firstly, an opposition strategy is introduced to enhance the diversity of solutions, while the second improvement develops a multiple-orthogonal-arrays strategy that is invoked in parallel form to improve the quality of the final solution; that method is benchmarked on 10 unconstrained optimization problems, 4 constrained optimization problems and 6 engineering design applications. The present study introduces an improved variant of SCA based on quantum local search for the first time. Furthermore, from the application point of view, the presented method is applied to solve systems of nonlinear equations, CEC 2017 real problems as well as realistic optimal power dispatch (OPD) for the first time.

In this paper, a new hybrid algorithm called the quantum-based sine cosine algorithm (Q-SCA) is proposed to cope with SNLEs. The proposed algorithm hybridizes the features of the sine cosine algorithm with quantum computing. The main feature of the proposed methodology consists in tuning the search space around the destination solution dynamically, so that the search space shrinks gradually with an increasing number of iterations. Further, a quantum computing scheme is incorporated as a local search scheme, namely quantum local search (QLS), in order to further strengthen and improve the solutions obtained in the sine cosine algorithm stage. The inherent features of this hybridization can increase the diversification and intensification of the Q-SCA and thus prevent trapping into local optima. The proposed algorithm is tested on twelve SNLEs, two electrical applications, the CEC 2017 benchmark and a realistic OPD problem. Experimental results prove that Q-SCA can obtain more powerful solutions than the compared algorithms.

The main contributions of this approach are to:

  1. (1)

    Introduce a novel quantum-based sine cosine algorithm (Q-SCA) for solving SNLEs. In Q-SCA, an updating strategy is designed and implemented randomly around the current solution and destination to improve the diversity of solutions efficiently.

  2. (2)

    Intelligently integrate the merits of two phases, namely the sine cosine algorithm (SCA) and quantum local search (QLS), so that trapping into local optima can be avoided.

  3. (3)

    Improve the exploration capabilities of the SCA phase to seek the overall search space, while incorporating the QLS phase as a counterpart in sequential form to enhance the exploitation tendency and refine the quality of solutions.

  4. (4)

    The integration between SCA and QLS improves the quality of solutions and speeds up the convergence to the global solution.

  5. (5)

    The effectiveness of Q-SCA is validated and demonstrated through twelve case studies from the literature and two real power system applications in which the bus variables are evaluated.

  6. (6)

    The scalability of the proposed Q-SCA algorithm is confirmed by applying it to expensive large-scale problems including the CEC 2017 benchmark and a realistic optimal power dispatch (OPD) problem.

The novelty of this study resides primarily in implementing and using an improved variant of the sine cosine algorithm (SCA), named quantum-based SCA (Q-SCA), through the following amendments to overcome the shortages of the conventional SCA variant: (1) a non-linear bridging mechanism is developed to perform a comparatively better transition from the exploration phase to the exploitation phase than the linear transition. (2) A modified searching equation is introduced to enrich the population diversity and prevent the skipping of promising regions. (3) A quantum local search (QLS) strategy is presented to enhance neighborhood-informed search capabilities and avoid falling into local optima. (4) The scalability is strictly investigated by applying the proposed method to large-scale practical optimization tasks such as the CEC 2017 datasets and the optimal power dispatch (OPD) optimization problem. Additionally, the proposed Q-SCA is implemented and employed to solve systems of nonlinear equations and two electrical applications for the first time, to effectively provide an accurate, reliable and efficient alternative when dealing with realistic systems of nonlinear equations.

The main frame of the paper is organized as follows. In Sect. 2, the preliminaries for the system of nonlinear equations and the basics of SCA are described. Section 3 presents the proposed Q-SCA in detail. Experimental results obtained for different systems are discussed in Sect. 4. Finally, Sect. 5 delivers the conclusion of this work and the future research.

Preliminaries

System formulation

The general form of the system of nonlinear equations (SNLEs) can be stated as follows:

$$\mathbf{SNLEs:}\quad\left\{\begin{array}{l} f_{1} (x_{1} ,x_{2} ,\ldots,x_{i} ,\ldots,x_{n} ) = 0 \\ f_{2} (x_{1} ,x_{2} ,\ldots,x_{i} ,\ldots,x_{n} ) = 0 \\ \qquad\vdots \\ f_{i} (x_{1} ,x_{2} ,\ldots,x_{i} ,\ldots,x_{n} ) = 0 \\ \qquad\vdots \\ f_{n} (x_{1} ,x_{2} ,\ldots,x_{i} ,\ldots,x_{n} ) = 0 \end{array}\right.$$
(1)

where \(n\) is the number of dimensions, \(x_{i}\) is the \(i{\text{th}}\) decision variable and \(f_{i} (.)\) is the \(i{\text{th}}\) equation.

Definition 1

If \(\forall \,i\,,\,i = \{ 1,2,...,n\} \,,\,\,f_{i} (.) = 0,\,\) then the solution \((x_{1}^{*} ,x_{2}^{*} ,...,x_{i}^{*} ,...,x_{n}^{*} )\) is called the optimal solution of the SNLEs.

In order to solve an SNLE, the system is usually transformed into an equivalent single optimization problem as follows:

$$\min \,f({\mathbf{x}}) = \sum\limits_{i = 1}^{n} f_{i}^{2} ({\mathbf{x}}),\qquad {\mathbf{x}} = (x_{1} ,x_{2} ,\ldots,x_{i} ,\ldots,x_{n} )$$
(2)

where \(\,f({\mathbf{x}})\) is the objective function that will be minimized.
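As a minimal illustration of this transformation (a sketch in Python rather than the MATLAB environment used later in the experiments), the residual functions of a system can be wrapped into the single scalar objective of Eq. (2); the two-equation system below is a hypothetical toy example used only to show the construction.

```python
import numpy as np

def snle_objective(residuals):
    """Build the scalar objective of Eq. (2): f(x) = sum_i f_i(x)^2."""
    def f(x):
        return sum(fi(x) ** 2 for fi in residuals)
    return f

# Hypothetical toy system: f1(x) = x1 + x2 - 3 = 0, f2(x) = x1*x2 - 2 = 0
residuals = [lambda x: x[0] + x[1] - 3.0,
             lambda x: x[0] * x[1] - 2.0]
f = snle_objective(residuals)
print(f(np.array([1.0, 2.0])))  # 0.0, since (1, 2) is an exact root
```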

An overview of sine cosine algorithm (SCA)

SCA is a population-based optimization algorithm that is built on the mathematical sine and cosine functions. Similar to other metaheuristic algorithms, SCA starts the search process by creating a set of solutions randomly. Afterwards, these solutions are evaluated according to the objective function, and the best one, denoted the destination point, is stored and updated during each iteration. The solutions are then updated to create new ones according to the sine and cosine functions (see Eq. (3)). Finally, the algorithm stops the optimization process when the maximum number of iterations is reached.

$$x_{j,t + 1} = \begin{cases} x_{j,t} + r_{1} \times \sin (r_{2} ) \times |r_{3} P_{j,t} - x_{j,t} |, & r_{4} < 0.5 \\ x_{j,t} + r_{1} \times \cos (r_{2} ) \times |r_{3} P_{j,t} - x_{j,t} |, & r_{4} \ge 0.5 \end{cases}\qquad j = 1,2,\ldots,n$$
(3)

where \(n\) is the number of dimensions, \(x_{j,t}\) is the location of the current solution in \(j{\text{th}}\) dimension at \(t{\text{th}}\) iteration and the \(P_{j,t}\)(destination) is the location of the best solution so far at \(t{\text{th}}\) iteration. The parameters \(r_{1} ,r_{2} ,r_{3} ,r_{4}\) are random numbers where \(|.|\) denotes the absolute value. The solution for each search agent is denoted by \(\,{\mathbf{x}}_{i} = (x_{i,1} ,x_{i,2} ,...,x_{i,n} )\,,i = 1,2,...,PS\), PS is population size.

As shown in Eq. (3), the SCA contains four parameters. The first parameter, \(r_{1}\), is responsible for determining the candidate region. The second parameter, \(r_{2}\), dictates how far the movement should be towards or away from the destination. The third parameter, \(r_{3}\), assigns a random weight to the destination, emphasizing it if \(r_{3} > 1\) or deemphasizing it if \(r_{3} < 1\). Finally, the parameter \(r_{4}\) is responsible for switching between the sine term and the cosine term in Eq. (3). Further, the parameter \(r_{1}\) is adjusted adaptively to balance the exploration/exploitation capabilities by using Eq. (4):

$$r_{1} = a - \frac{a \times \,t}{T}$$
(4)

where \(a\) is a constant number, \(t\) is the current iteration, \(T\) is the maximum number of iterations. The flowchart of the original SCA is illustrated in Fig. 1.

Fig. 1
figure1

The basic SCA algorithm
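A compact sketch of the basic SCA loop described above is given below (Python, for illustration only). The ranges used for the random parameters \(r_2 \in [0, 2\pi]\), \(r_3 \in [0, 2]\), \(r_4 \in [0, 1]\) and the constant \(a = 2\) follow the common SCA settings and are assumptions here, since the text does not fix them.

```python
import numpy as np

def sca(objective, lb, ub, n, ps=30, T=500, a=2.0, seed=0):
    """Minimal sketch of the basic SCA of Eqs. (3)-(4)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(ps, n))              # random initial agents
    best = min(X, key=objective).copy()                # destination P
    for t in range(T):
        r1 = a - a * t / T                             # Eq. (4): linear decrease
        for i in range(ps):
            r2 = rng.uniform(0.0, 2.0 * np.pi, n)
            r3 = rng.uniform(0.0, 2.0, n)
            r4 = rng.uniform(0.0, 1.0, n)
            trig = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
            X[i] = np.clip(X[i] + r1 * trig * np.abs(r3 * best - X[i]), lb, ub)  # Eq. (3)
            if objective(X[i]) < objective(best):      # keep the best solution so far
                best = X[i].copy()
    return best, objective(best)
```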

Quantum computing

More recently, quantum computing (QC) has flourished due to its superiority over classical computing on many specialized tasks (Wolpert and Macready 1997). It is inspired by principles and concepts of quantum theory, such as the superposition of quantum states, entanglement and interference (Jaeger 2006). QC is capable of handling a huge number of quantum states simultaneously. In addition, plenty of research has been carried out to integrate metaheuristic algorithms with QC (Xi et al. 2008; Zouache et al. 2016; Oguz Emrah Turgut 2014). Turgut et al. (Oguz Emrah Turgut 2014) presented a novel PSO based on chaotic quantum integration for solving SNLEs. Although this algorithm is more effective than the one developed in (Sun et al. 2005) thanks to the introduction of chaotic maps, it still suffers from trapping into local optima in some cases, and it takes a long computation time.

Motivation aspects

Although SCA provides efficient accuracy when dealing with optimization tasks and can expose competitive results in comparison with other well-known population-based optimization techniques, it does not have sufficient quality for highly complex optimization tasks and still suffers from the dilemma of falling into local optima. To overcome these challenges and improve its search capability, a newly integrated version based on SCA and quantum local search (QLS) is proposed to solve real applications of electrical power stations. The proposed integrated version is denoted Q-SCA. In the proposed Q-SCA, the updating mechanism of SCA is improved by searching around the current solution and the destination in a random manner with the aim of maintaining diversity. Furthermore, the QLS works in a sequential manner as a guidance phase to reach the promising region and improve the solution quality. Accordingly, the proposed methodology is intended to enhance the convergence performance, balance the diversification and intensification searches, and improve the seeking process instead of running notable generations without any improvement. The effectiveness of Q-SCA has been demonstrated and investigated through comprehensive experiments on different case studies and two real applications of electrical power stations. Experimental results and comparisons affirm that the proposed Q-SCA is a promising searching technique for handling nonlinear system applications of optimization.

The proposed Q-SCA

Until now, few other improved variants of SCA have been reported in the reputable literature. With the standard scenario, SCA updates its agents towards the candidate solution based on Eq. (3). However, the SCA, although an efficient algorithm, is still inclined to converge to local optima in some cases. Hence, SCA's problems of immature convergence and of producing incompetent results can still be experienced. In some cases, the standard SCA is not capable of performing a seamless transition from the exploration to the exploitation phase through the use of Eq. (4). In this regard, the basic SCA needs improved operators to tackle problems with stronger exploration/exploitation capabilities. To relieve the above-mentioned concerns, quantum local search (QLS) is introduced as a local search stage. QLS can help SCA to search with deeper exploration/exploitation patterns as an alternative. Using this concept, it can be ensured that SCA handles global searching tasks more efficiently and is not locked in local optima. Hence, the hybridization of SCA with quantum local search, namely the quantum sine–cosine algorithm (Q-SCA), attains better searching quality. For this purpose, three main modifications are proposed: (1) a new equation for the parameter \(r_{1}\) is introduced to tune the search space dynamically, so that the search space shrinks gradually as the optima are approached. (2) A new solution-generating method is introduced based on exploiting the destination to enhance the quality of the solution. (3) QLS is incorporated with the aim of improving the solution quality by generating new solutions around the best solutions obtained so far, to achieve effective exploration and exploitation. Further, QLS can avoid running the proposed algorithm without any improvement in the solutions. Based on the above phases, it can be concluded that the proposed Q-SCA is a robust approach with powerful searching quality. We start the explanation of the Q-SCA as follows.

Phase 1: SCA

  • Step 1 Initialize the agents’ locations

A population of agents is initialized randomly within the search space, where the algorithm assigns an \(n\)-dimensional random vector to the \(i{\text{th}}\) agent, \(\,{\mathbf{x}}_{i} = (x_{i,1} ,x_{i,2} ,...,x_{i,n} )\,\), \(i = 1,2,...,PS\).

  • Step 2 Evaluate the agents

Each agent is evaluated according to the quality of its location with respect to the objective function in Eq. (2), and the best solution (destination) found so far is recorded.

  • Step 3 Update the agent’s locations

Agents create new locations inside the search space as follows: the parameters \(r_{1} ,r_{2} ,r_{3} ,r_{4}\) are updated firstly; then the new location of the candidate agent in \(jth\) dimension is created as follows:

$$x_{j,t + 1} = \begin{cases} x_{j,t} + r_{1} \times \sin (r_{2} ) \times |r_{3} P_{j,t} - x_{j,t} |, & r_{4} < 0.5,\; r_{5} > p_{j} \\ x_{j,t} + r_{1} \times \cos (r_{2} ) \times |r_{3} P_{j,t} - x_{j,t} |, & r_{4} \ge 0.5,\; r_{5} > p_{j} \\ P_{j,t} + r_{1} \times \sin (r_{2} ) \times |r_{3} P_{j,t} - x_{j,t} |, & r_{4} < 0.5,\; r_{5} \le p_{j} \\ P_{j,t} + r_{1} \times \cos (r_{2} ) \times |r_{3} P_{j,t} - x_{j,t} |, & r_{4} \ge 0.5,\; r_{5} \le p_{j} \end{cases}\qquad i = 1,\ldots,PS,\;\; j = 1,\ldots,n$$
(5)
$$p_{j} = 0.4 - 0.4\frac{t}{T},\,$$
(6)

where \(p_{j}\) is a mutation probability for switching between the equation states according to the random parameter \(r_{5}\).

As seen from Eq. (5), a new generating mechanism is introduced by exploiting the destination in the updating mechanism. In Eq. (4), the candidate region is narrowed in a linear manner, but this is unsatisfactory for some applications: for a large search space, the algorithm needs an extensive number of iterations to find the promising region. To overcome this drawback, the performance of the SCA is improved by tuning the parameter \(r_{1}\) dynamically, so that the search space shrinks gradually as the optima are approached with an increasing number of iterations, as follows.

$$r_{1} = \alpha .\left( {a^{\max } - \frac{{t\,(a^{\max } - a^{\min } )}}{T}} \right)\,\,,\,\,\,\alpha = 0.98^{t}$$
(7)
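The sketch below (Python, illustrative only) shows one iteration of this modified SCA phase, combining the bidirectional rule of Eqs. (5)-(6) with the dynamic \(r_1\) of Eq. (7). The values \(a^{\max} = 2\), \(a^{\min} = 0\) and the ranges of the random parameters are assumptions, since the text does not state them explicitly.

```python
import numpy as np

def qsca_phase1_update(X, best, t, T, lb, ub, a_max=2.0, a_min=0.0, rng=None):
    """One iteration of the modified SCA update of Eqs. (5)-(7)."""
    rng = rng or np.random.default_rng()
    ps, n = X.shape
    alpha = 0.98 ** t
    r1 = alpha * (a_max - t * (a_max - a_min) / T)       # Eq. (7): dynamic shrinking
    p = 0.4 - 0.4 * t / T                                # Eq. (6): mutation probability
    for i in range(ps):
        r2 = rng.uniform(0.0, 2.0 * np.pi, n)
        r3 = rng.uniform(0.0, 2.0, n)
        r4 = rng.uniform(0.0, 1.0, n)
        r5 = rng.uniform(0.0, 1.0, n)
        trig = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
        step = r1 * trig * np.abs(r3 * best - X[i])
        base = np.where(r5 > p, X[i], best)              # Eq. (5): search around x or P
        X[i] = np.clip(base + step, lb, ub)
    return X
```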

Phase 2: QLS

To enhance the searching quality and prevent trapping into local optima, QLS is incorporated as a local search strategy. In the beginning, the SCA phase operates to explore the space, and the solutions obtained from the SCA phase are taken as the starting point for the QLS phase. Afterwards, the steps of the QLS are applied in order to reach high levels of exploration and exploitation and precise, stable convergence to high-quality solutions. Thus, the hybrid mechanism improves the quality of the solutions and saves the computational time required to find a global optimal solution of the nonlinear system of equations. The steps of this phase can be outlined as follows:

  • Step 1 Receive the candidate solutions

This step receives a population of solutions, \(\{ {\mathbf{x}}_{i} \}_{i = 1}^{PS}\), together with its best location (destination), \({\mathbf{x}}^{*}\), from the SCA phase.

  • Step 2 Determine the quantum search scope

In the quantum search scope, the state of a solution is depicted by a wave function \(\psi\) instead of a random increment, since the random movement of the solution may yield a divergent behavior. The new location is determined by the probability density function \(|\psi |^{2}\). Assume that at iteration \(t\), solution \(i\) moves in the n-dimensional space with a \(\delta\) potential well centered at \(P_{ij}\) on the \(j{\text{th}}\) dimension. Correspondingly, the wave function at iteration \(t + 1\) is described as follows.

$$\psi (x_{ij}^{t + 1} ) = \frac{1}{\sqrt{L_{ij}^{t}}}\,\exp\!\left( - \frac{|x_{ij}^{t + 1} - P_{ij}^{t} |}{L_{ij}^{t}}\right),$$
(8)

where \(L_{ij}^{t}\) is the quantum search scope (standard deviation) of the double exponential distribution, which varies with the iteration number \(t\). The corresponding probability density function is

$$Q(x_{ij}^{t + 1} ) = |\psi (x_{ij}^{t + 1} )|^{2} = \frac{1}{L_{ij}^{t}}\,\exp\!\left( - \frac{2\,|x_{ij}^{t + 1} - P_{ij}^{t} |}{L_{ij}^{t}}\right).$$
(9)

Then the probability distribution function \(F\) can be written as follows:

$$F(x_{ij}^{t + 1} ) = 1 - \exp\!\left( - \frac{2\,|x_{ij}^{t + 1} - P_{ij}^{t} |}{L_{ij}^{t}}\right).$$
(10)

According to the Monte Carlo technique, we can determine the \(j{\text{th}}\) dimension of the search agent \({\mathbf{x}}_{i}\) at iteration \(t + 1\) as follows:

$$x_{ij}^{t + 1} = P_{ij}^{t} + \frac{1}{2}\,L_{ij}^{t}\,\ln\!\left(\frac{1}{u_{ij}^{t + 1}}\right),$$
(11)

where \(u_{ij}^{t + 1}\) is a random number uniformly distributed over (0, 1) and \(L_{ij}^{t}\) is determined as

$$L_{ij}^{t} = 2\chi |C_{j}^{t} - x_{ij}^{t + 1} |,$$
(12)

where \(C_{{}}^{t}\) is the mean position of the search agents, defined as the mean of the positions of all search agents; that is,

$$C_{{}}^{t} = (C_{1}^{t} ,C_{2}^{t} ,...,C_{n}^{t} ) = \left( {\frac{1}{{PS}}\sum\limits_{{i = 1}}^{{PS}} {x_{{i1}}^{t} } ,\frac{1}{{PS}}\sum\limits_{{i = 1}}^{{PS}} {x_{{i2}}^{t} } ,...,\frac{1}{{PS}}\sum\limits_{{i = 1}}^{{PS}} {x_{{in}}^{t} } } \right)$$
(13)
  • Step 3 Generate new solution: the position of the search agent is updated according to the following equation

$$x_{ij}^{t + 1} = P_{ij}^{t} \pm \beta\,|C_{j}^{t} - x_{ij}^{t + 1} |\,\ln\!\left(\frac{1}{u_{ij}^{t + 1}}\right),$$
(14)
$$\beta = \beta_{\max } \cdot\exp\!\left( \log\!\left( \frac{\beta_{\min }}{\beta_{\max }} \right)\cdot\frac{t}{T} \right)$$
(15)

where \(P_{ij}^{t}\) is the local attractor solution that is chosen randomly, either from the current population or as the destination, \(\beta\) is the search radius in each iteration, \(\beta_{\max }\) is the maximum radius and \(\beta_{\min }\) is the minimum radius. Therefore, we obtain a new population, denoted by \(\{ {\mathbf{x^{\prime}}}_{i} \}_{i = 1}^{PS}\), whose best location (destination) is denoted by \({\mathbf{x^{\prime}}}^{ * }\).

  • Step 4 Update the destination

If the function \(f({\mathbf{x^{\prime}}}^{ * } )\) \(<\) \(f({\mathbf{x}}^{*} )\), update the fitness value and best location (destination) as: \(f({\mathbf{x}}^{*} )\) = \(f({\mathbf{x^{\prime}}}^{ * } )\),\({\mathbf{x}}^{*}\) = \({\mathbf{x^{\prime}}}^{ * }\).

  • Step 5 Stopping quantum search

If no improvement is observed in the function \(f({\mathbf{x}}^{*} )\) over the allotted iterations, stop the quantum search process and store \({\mathbf{x}}^{*}\) as the best solution. The flowchart of the proposed Q-SCA is shown in Fig. 2.

Fig. 2
figure2

Architecture of the proposed Q-SCA
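A compact sketch of one QLS move, following Eqs. (13)-(15), is given below (Python, illustrative only). The bounds \(\beta_{\max} = 1\) and \(\beta_{\min} = 0.5\), the use of the agent's current position on the right-hand side of Eq. (14), and the 50/50 random choice of the local attractor are assumptions made for this sketch.

```python
import numpy as np

def qls_step(X, best, t, T, beta_max=1.0, beta_min=0.5, rng=None):
    """One quantum local search move following Eqs. (13)-(15)."""
    rng = rng or np.random.default_rng()
    ps, n = X.shape
    C = X.mean(axis=0)                                             # Eq. (13): mean position
    beta = beta_max * np.exp(np.log(beta_min / beta_max) * t / T)  # Eq. (15): search radius
    X_new = np.empty_like(X)
    for i in range(ps):
        u = rng.uniform(1e-12, 1.0, n)                  # uniform in (0, 1), avoids log(1/0)
        sign = np.where(rng.uniform(0.0, 1.0, n) < 0.5, 1.0, -1.0)
        P = X[i] if rng.uniform() < 0.5 else best       # random local attractor
        X_new[i] = P + sign * beta * np.abs(C - X[i]) * np.log(1.0 / u)  # Eq. (14)
    return X_new
```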

Experiments and results

In this section, twelve systems of nonlinear equations and two electrical applications are utilized in order to demonstrate the efficiency and robustness of the proposed algorithm. The obtained results are compared with those of other known methods that use the same systems (Oguz Emrah Turgut 2014; Sun et al. 2005; Jäger and Ratz 1995; Sharma and Arora 2013; Grosan and Abraham 2008; Abdollahi et al. 2013, 2016; Oliveira and Petraglia 2013; Floudas et al. 1999; Wang et al. 2011). For these experiments, the algorithm is coded in MATLAB 7, running on a computer with an Intel Core i5 (1.8 GHz) processor and 4 GB of RAM. No commercial SCA-based tool was used in this research. The statistical measures in terms of the minimum value, mean value, maximum value and standard deviation for each compared algorithm on each case are reported in (Oguz Emrah Turgut 2014). Referring to Derrac et al. (Garcia et al. 2009), statistical tests should be carried out to judge the superiority of the evaluated algorithms; such tests verify that the differences observed in the results are statistically significant. To statistically assess the new Q-SCA against the other algorithms, the Wilcoxon signed-rank test at the 5% significance level is also executed here.

Parameter settings

To show the improvement of the proposed algorithm, fair comparisons are conducted by unifying the parameters of the compared algorithms. For all the studied cases, the parameter configurations of all implemented algorithms are set after performing a few trials and are tabulated in Table 1. Also, 30 consecutive runs are carried out for each case study to demonstrate the stability of the proposed algorithms with respect to the statistical measures.

Table 1 The parameter settings of the implemented algorithms
  • Case 1 The following system has been studied in (Oguz Emrah Turgut 2014; Sun et al. 2005) where the system equations are defined as follows

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = 2x_{1} + x_{2} + x_{3} + x_{4} + x_{5} = 0 \hfill \\ f_{2} ({\mathbf{x}}) = x_{1} + 2x_{2} + x_{3} + x_{4} + x_{5} = 0 \hfill \\ f_{3} ({\mathbf{x}}) = x_{1} + x_{2} + 2x_{3} + x_{4} + x_{5} = 0 \hfill \\ f_{4} ({\mathbf{x}}) = x_{1} + x_{2} + x_{3} + 2x_{4} + x_{5} = 0 \hfill \\ f_{5} ({\mathbf{x}}) = x_{1} x_{2} x_{3} x_{4} x_{5} - 1.0 = 0 \hfill \\ - 2 \le x_{i} \le 2,\,i = 1,2,...,5. \hfill \\ \end{gathered}$$
(16)

This system contains five equations (\({f}_{1},{f}_{2},{f}_{3},{f}_{4},{f}_{5}\)) that need to be solved simultaneously and five unknowns (\({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5}\)). In this context, the proposed optimization methodology aims to obtain the optimal values of these unknowns, and the values of the corresponding equations are then calculated; both are recorded in Table 2. Table 2 shows the optimum solutions obtained by the proposed Q-SCA and by SCA without the quantum local search phase. The proposed Q-SCA is compared with other algorithms (Oguz Emrah Turgut 2014) such as the chaotic quantum-behaved particle swarm optimization algorithm (L-QPSO), quantum-behaved particle swarm optimization (QPSO), the gravitational search algorithm (GRAV), the intelligent tuned harmony search (ITHS) algorithm and other literature studies. The proposed Q-SCA surpasses most of these algorithms, and it is competitive with L-QPSO and with Jäger and Ratz (1995).
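For concreteness, the residuals of Eq. (16) and the corresponding scalar objective of Eq. (2) can be coded as in the following sketch (Python, illustrative only); the evaluation point is arbitrary and is not one of the reported solutions.

```python
import numpy as np

def case1_residuals(x):
    """Residuals of the five-equation system of Eq. (16)."""
    return np.array([
        2 * x[0] + x[1] + x[2] + x[3] + x[4],
        x[0] + 2 * x[1] + x[2] + x[3] + x[4],
        x[0] + x[1] + 2 * x[2] + x[3] + x[4],
        x[0] + x[1] + x[2] + 2 * x[3] + x[4],
        x[0] * x[1] * x[2] * x[3] * x[4] - 1.0,
    ])

def case1_objective(x):
    """Scalar objective of Eq. (2) for case 1, minimized over [-2, 2]^5."""
    return float(np.sum(case1_residuals(x) ** 2))

print(case1_objective(np.zeros(5)))  # 1.0: only f5 is violated at the origin
```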

Table 2 The simulation results of the proposed Q-SCA and different approaches for case 1

Further, the convergence behavior for case 1 is shown in Fig. 3a.

Fig. 3
figure3

The convergence behavior for all cases

  • Case 2 The system has been introduced in (Oguz Emrah Turgut 2014) and the system is stated as follows

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = x_{1} + \frac{{x_{2}^{4} x_{4} x_{6} }}{4} + 0.75 = 0 \hfill \\ f_{2} ({\mathbf{x}}) = x_{2} + 0.405\exp (1 + x_{1} x_{2} ) - 1.405 = 0 \hfill \\ f_{3} ({\mathbf{x}}) = x_{3} - \frac{{x_{4} x_{6} }}{2} + 1.5 = 0 \hfill \\ f_{4} ({\mathbf{x}}) = x_{4} - 0.605\exp (1 - x_{3}^{2} ) - 0.395 = 0 \hfill \\ f_{5} ({\mathbf{x}}) = x_{5} - \frac{{x_{2} x_{6} }}{2} + 1.5 = 0 \hfill \\ f_{6} ({\mathbf{x}}) = x_{6} - x_{1} x_{5} = 0 \hfill \\ \end{gathered}$$
(17)

Table 3 shows the optimum solution obtained by the proposed Q-SCA, which outperforms the other algorithms. The convergence behavior for case 2 is shown in Fig. 3b.

Table 3 The simulation results of the proposed Q-SCA and different approaches for case 2
  • Case 3 The system has been developed in (Oguz Emrah Turgut 2014) and its formulation is stated as follows

$$\begin{gathered} \,f_{i} ({\mathbf{x}}) = x_{i} - \cos \left( {2x_{i} - \sum\limits_{j = 1}^{4} {x_{j} } } \right) = 0, \hfill \\ i = 1,2,3,4. \hfill \\ \end{gathered}$$
(18)

The simulation results for this case are reported in Table 4, which shows that the proposed Q-SCA is competitive with some algorithms and outperforms GRAV. The convergence behavior for case 3 is depicted in Fig. 3c.

Table 4 The simulation results of the proposed Q-SCA and different approaches for case 3
  • Case 4 The neurophysiology application (Oguz Emrah Turgut 2014) is utilized to test the efficiency of the proposed algorithm, and its system of equations is described as follows

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = x_{1}^{2} + x_{3}^{2} = 1 \hfill \\ f_{2} ({\mathbf{x}}) = x_{2}^{2} + x_{4}^{2} = 1 \hfill \\ f_{3} ({\mathbf{x}}) = x_{5} x_{3}^{3} + x_{6} x_{4}^{3} = 0 \hfill \\ f_{4} ({\mathbf{x}}) = x_{5} x_{1}^{3} + x_{6} x_{2}^{3} = 0 \hfill \\ f_{5} ({\mathbf{x}}) = x_{5} x_{1} x_{3}^{2} + x_{6} x_{4}^{2} x_{2} = 0 \hfill \\ f_{6} ({\mathbf{x}}) = x_{5} x_{1}^{2} x_{3} + x_{6} x_{2}^{2} x_{4} = 0 \hfill \\ - 10 \le x_{i} \le 10,\,i = 1,2,...,6. \hfill \\ \end{gathered}$$
(19)

Table 5 shows that the optimum solution obtained by the proposed Q-SCA algorithm is competitive with the LQPSO and QPSO algorithms, while it outperforms SCA, ITHS, GRAV and Grosan et al. (Grosan and Abraham 2008). Although the LQPSO and QPSO algorithms give the same results after 747 iterations, the proposed Q-SCA reaches the optimum solution after 308 iterations and thus outperforms all algorithms from the viewpoint of convergence speed. The convergence behavior for case 4 is shown in Fig. 3d.

Table 5 The simulation results of the proposed Q-SCA and different approaches for case 4
  • Case 5 This case describes the interval arithmetic problem that has been presented in (Oguz Emrah Turgut 2014). The nonlinear system is formulated as follows

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = x_{1} - 0.25428722 - 0.18324757x_{4} x_{3} x_{9} = 0 \hfill \\ f_{2} ({\mathbf{x}}) = x_{2} - 0.37842197 - 0.16275449x_{1} x_{10} x_{6} = 0 \hfill \\ f_{3} ({\mathbf{x}}) = x_{3} - 0.27162577 - 0.16955071x_{1} x_{2} x_{10} = 0 \hfill \\ f_{4} ({\mathbf{x}}) = x_{4} - 0.19807914 - 0.15585316x_{7} x_{1} x_{6} = 0 \hfill \\ f_{5} ({\mathbf{x}}) = x_{5} - 0.44166728 - 0.19950920x_{7} x_{6} x_{3} = 0 \hfill \\ f_{6} ({\mathbf{x}}) = x_{6} - 0.14654113 - 0.18922793x_{8} x_{5} x_{10} = 0 \hfill \\ f_{7} ({\mathbf{x}}) = x_{7} - 0.42937161 - 0.21180476x_{2} x_{5} x_{8} = 0 \hfill \\ f_{8} ({\mathbf{x}}) = x_{8} - 0.07056438 - 0.17081208x_{1} x_{7} x_{6} = 0 \hfill \\ f_{9} ({\mathbf{x}}) = x_{9} - 0.34504906 - 0.19612740x_{10} x_{6} x_{8} = 0 \hfill \\ f_{10} ({\mathbf{x}}) = x_{10} - 0.42651102 - 0.21466544x_{4} x_{8} x_{1} = 0 \hfill \\ \end{gathered}$$
(20)

The simulation results are recorded in Table 6, which compares the proposed Q-SCA with different algorithms (Oguz Emrah Turgut 2014; Grosan and Abraham 2008; Oliveira and Petraglia 2013). Based on the simulation results, it can be concluded that the proposed Q-SCA algorithm is competitive with L-QPSO and QPSO and outperforms the other algorithms.

Table 6 The simulation results of the proposed Q-SCA and different approaches for case 5

Further, Q-SCA has a better convergence speed than L-QPSO and QPSO, since it reached the optimum solution after 474 iterations while the others converged after 487 iterations (Oguz Emrah Turgut 2014). The convergence behavior for case 5 is shown in Fig. 3e.

  • Case 6 This case presents the inverse position problem of the six-revolute-joint application taken from (Oguz Emrah Turgut 2014), and the system equations are outlined as follows

$$\begin{array}{*{20}l} {f_{{1i}} ({\mathbf{x}}) = x_{i}^{2} + x_{{i + 1}}^{2} - 1 = 0} \hfill \\ \begin{aligned} f_{{2i}} ({\mathbf{x}}) & = a_{{1i}} x_{1} x_{3} + a_{{2i}} x_{1} x_{4} + a_{{3i}} x_{2} x_{3} + a_{{4i}} x_{2} x_{4} \\ & + a_{{5i}} x_{2} x_{7} + a_{{6i}} x_{5} x_{8} + a_{{7i}} x_{6} x_{7} + a_{{8i}} x_{6} x_{8} \\ & + a_{{9i}} x_{1} + a_{{10i}} x_{2} + a_{{11i}} x_{3} + a_{{12i}} x_{4} + a_{{13i}} x_{5} \\ & + a_{{14i}} x_{6} + a_{{15i}} x_{7} + a_{{16i}} x_{8} + a_{{17i}} = 0 \\ \end{aligned} \hfill \\ \end{array}$$
(21)

The coefficients used in this system, \(a_{ij}\), are presented in Table 7, where \(1 \le i \le 17,\,1 \le j \le 4\).

Table 7 Coefficients for case 6

Table 8 demonstrates the comparison between the proposed Q-SCA algorithm and different algorithms (Oguz Emrah Turgut 2014; Grosan and Abraham 2008) for case 6. The proposed Q-SCA outperforms the other algorithms in terms of optimality. The convergence behavior for this case is portrayed in Fig. 3f.

Table 8 The simulation results of the proposed Q-SCA and different approaches for case study 6
  • Case 7 This case simulates the combustion problem occurring at a temperature of 3000 °C; it has been studied in (Oguz Emrah Turgut 2014), and the general form of this system can be defined as follows

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = x_{2} + 2x_{6} + x_{9} + 2x_{10} - 10^{ - 5} = 0 \hfill \\ f_{2} ({\mathbf{x}}) = x_{3} + x_{8} - 3 \times 10^{ - 5} = 0 \hfill \\ f_{3} ({\mathbf{x}}) = x_{1} + x_{3} + 2x_{5} + 2x_{8} + x_{9} + x_{10} - 5 \times 10^{ - 5} = 0 \hfill \\ f_{4} ({\mathbf{x}}) = x_{4} + 2x_{7} - 10^{ - 5} = 0 \hfill \\ f_{5} ({\mathbf{x}}) = 0.5140437 \times 10^{ - 7} x_{5} - x_{1}^{2} = 0 \hfill \\ f_{6} ({\mathbf{x}}) = 0.1006932 \times 10^{ - 6} x_{6} - x_{1}^{2} = 0 \hfill \\ f_{7} ({\mathbf{x}}) = 0.7816278 \times 10^{ - 15} x_{7} - x_{4}^{2} = 0 \hfill \\ f_{8} ({\mathbf{x}}) = 0.1496236 \times 10^{ - 6} x_{8} - x_{1} x_{3} = 0 \hfill \\ f_{9} ({\mathbf{x}}) = 0.6194411 \times 10^{ - 7} x_{9} - x_{1} x_{2} = 0 \hfill \\ f_{10} ({\mathbf{x}}) = 0.2089296 \times 10^{ - 14} x_{10} - x_{1} x_{2}^{2} = 0 \hfill \\ \end{gathered}$$
(22)

Table 9 demonstrates the comparison between the proposed Q-SCA and different algorithms (Oguz Emrah Turgut 2014; Grosan and Abraham 2008; Oliveira and Petraglia 2013) for case 7. The proposed Q-SCA outperforms the other algorithms in terms of optimality. The convergence behavior for this case is provided in Fig. 3g.

Table 9 The simulation results of the proposed Q-SCA and different approaches for case 7
  • Case 8 This system contains eight nonlinear equations as shown in (Oguz Emrah Turgut 2014) and it is defined as follows

$$\begin{array}{*{20}l} \begin{aligned} f_{1} ({\mathbf{x}}) = & \;4.731 \times 10^{{ - 3}} x_{1} x_{3} - 0.3578x_{2} x_{3} - 0.1238x_{1} \\ & + x_{7} - 1.637 \times 10^{{ - 3}} x_{2} - 0.9338x_{4} - 0.3571 = 0 \\ \end{aligned} \hfill \\ \begin{aligned} f_{2} ({\mathbf{x}}) = & \;0.2238x_{1} x_{3} + 0.7623x_{2} x_{3} + 0.2638x_{1} \\ & - x_{7} - 0.007745x_{2} - 0.6734x_{4} - 0.6022 = 0 \\ \end{aligned} \hfill \\ \begin{gathered} f_{3} ({\mathbf{x}}) = x_{6} x_{8} + 0.3578x_{1} + 4.731 \times 10^{{ - 3}} x_{2} = 0 \hfill \\ f_{4} ({\mathbf{x}}) = - 0.7623x_{1} + 0.2238x_{2} + 0.3461 = 0 \hfill \\ f_{5} ({\mathbf{x}}) = x_{1}^{2} + x_{2}^{2} - 1 = 0 \hfill \\ f_{6} ({\mathbf{x}}) = x_{3}^{2} + x_{4}^{2} - 1 = 0 \hfill \\ f_{7} ({\mathbf{x}}) = x_{5}^{2} + x_{6}^{2} - 1 = 0 \hfill \\ f_{8} ({\mathbf{x}}) = x_{7}^{2} + x_{8}^{2} - 1 = 0 \hfill \\ \end{gathered} \hfill \\ \end{array}$$
(23)

The comparisons between the proposed Q-SCA and the other algorithms are shown in Table 10, where the proposed Q-SCA is capable of attaining better solutions compared to the other algorithms. Further, the convergence curve for case 8 is portrayed in Fig. 3h.

Table 10 The simulation results of the proposed Q-SCA and different approaches for case 8
  • Case 9 This case is devoted to finding the optimal solution of a thin-wall rectangle girder section (Oguz Emrah Turgut 2014), where the system is considered as follows

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = bh - (b - 2t)(h - 2t) = 165 \hfill \\ f_{2} ({\mathbf{x}}) = \frac{{bh^{3} }}{12} - \frac{(b - 2t)(h - 2t)^{3}}{{12}} = 9369 \hfill \\ f_{3} ({\mathbf{x}}) = \frac{{2t(h - t)^{2} (b - t)^{2} }}{h + b - 2t} = 6835 \hfill \\ \end{gathered}$$
(24)

where \(b\) is the width of the section, \(h\) is the height of the section, and \(t\) is the section thickness. Table 11 shows that the optimum solutions obtained by the proposed Q-SCA are competitive with those of LQPSO, Abdollahi et al. (2013), Mo et al. (2009) and Luo et al. (2008), while Q-SCA outperforms SCA, QPSO, ITHS, GRAV and Jaberipour et al. (2011). The convergence behavior for case 9 is shown in Fig. 3i.

Table 11 The simulation results of the proposed algorithm and different approaches for case 9
  • Case 10 This case is taken from (Jaberipour et al. 2011; Abdollahi et al. 2013, 2016) to demonstrate the robustness of the proposed Q-SCA in handling the systems of nonlinear equations

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = x_{1}^{{x_{2} }} + x_{2}^{{x_{1} }} - 5x_{1} x_{2} x_{3} = 85 \hfill \\ f_{2} ({\mathbf{x}}) = x_{1}^{3} - x_{2}^{{x_{3} }} - x_{3}^{{x_{2} }} = 60 \hfill \\ f_{3} ({\mathbf{x}}) = x_{1}^{{x_{3} }} + x_{3}^{{x_{1} }} - x_{2} = 2 \hfill \\ \end{gathered}$$
(25)

The solution of this system is (4, 3, 1), as in (Jaberipour et al. 2011; Abdollahi et al. 2013, 2016). The proposed Q-SCA finds the global optimal solution for this case, and its convergence behavior outperforms that of the other algorithms. Further, the proposed Q-SCA algorithm takes 120 iterations with a population size of 20 to reach the optimal solution, whereas (Abdollahi et al. 2013), (Abdollahi et al. 2016) and (Jaberipour et al. 2011) reached that answer after 200, 250, and 1000 iterations, respectively, each using a population of 250 agents. The results of the proposed algorithm are shown in Table 12 and Fig. 3j.

Table 12 The simulation results of the proposed algorithm for case study 10
  • Case 11 This case was given in (Jaberipour et al. 2011; Abdollahi et al. 2013, 2016)

$$\begin{gathered} f_{1} ({\mathbf{x}}) = x_{1}^{3} - 3x_{1} x_{2}^{2} - 1 = 0 \hfill \\ f_{2} ({\mathbf{x}}) = 3x_{1}^{2} x_{2} - x_{2}^{3} + 1 = 0 \hfill \\ \end{gathered}$$
(26)

Table 13 demonstrates the optimum solution found by the proposed Q-SCA. From Table 13, we can see that the proposed Q-SCA outperforms the other algorithms in terms of the optimal value. In addition, the convergence behavior of case 11 is shown in Fig. 3k.

Table 13 The simulation results of the proposed algorithm and different approaches for case study 11
  • Case 12 This case was given in (Abdollahi et al. 2013, 2016; Floudas et al. 1999; Wang et al. 2011)

$$\begin{gathered} \,f_{1} ({\mathbf{x}}) = 0.5\sin (x_{1} x_{2} ) - \frac{0.25\,x_{2}}{\pi} - 0.5x_{1} = 0 \hfill \\ f_{2} ({\mathbf{x}}) = \left(1 - \frac{0.25}{\pi}\right)\left(\exp (2x_{1} ) - e\right) + \frac{e\,x_{2}}{\pi} - 2ex_{1} = 0 \hfill \\ \end{gathered}$$
(27)

The results for this case are shown in Table 14. The optimum solution obtained by the proposed Q-SCA is competitive with that of Abdollahi et al. (2013), and it outperforms the other algorithms in terms of the optimal value. Although Abdollahi et al. (2013) obtain the same optimal value, their method elapses a large number of iterations (about 180), whereas the proposed Q-SCA needs fewer iterations (63 iterations). In addition, the convergence behavior of case 12 is shown in Fig. 3l.

Table 14 The simulation results of the proposed algorithm and different approaches for case study 12

To evaluate the efficiency and feasibility of the proposed Q-SCA algorithm, the statistical measures for each case, namely the best, mean, median and worst objective values and the standard deviation (SD) over 30 independent runs, are reported in Table 15, where the best result is given in bold font.
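These measures can be computed directly from the per-run best objective values, as in the short sketch below (Python, illustrative only; the listed run values are placeholders, not the values of Table 15).

```python
import numpy as np

def run_statistics(best_values):
    """Best, mean, median, worst objective values and sample SD over independent runs."""
    v = np.asarray(best_values, dtype=float)
    return {"best": v.min(), "mean": v.mean(), "median": float(np.median(v)),
            "worst": v.max(), "sd": v.std(ddof=1)}

print(run_statistics([1.2e-16, 3.4e-16, 8.9e-17]))  # placeholder values for three runs
```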

Table 15 The simulation results of the proposed algorithm using the statistical measures

Wilcoxon signed-rank test

The Wilcoxon signed-rank test is a nonparametric procedure used in hypothesis-testing situations involving a design with two samples (Garcia et al. 2009). It is a pair-wise test that aims to detect significant differences between the behaviors of two methods. It is associated with a p-value computed under the null hypothesis of no difference between the two methods: \(p < 0.05\) indicates a rejection of the null hypothesis, while \(p > 0.05\) indicates a failure to reject it. \(R^{ + }\) is the sum of the positive ranks, while \(R^{ - }\) is the sum of the negative ranks. In Table 16, we present the results of the Wilcoxon signed-rank test for the different algorithms, where the proposed Q-SCA is compared against LQPSO, QPSO, GRAV, ITHS, and SCA. We can conclude from Table 16 that the proposed Q-SCA is significantly better than the other algorithms.
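The test can be reproduced, for example, with SciPy as sketched below; the paired samples here are placeholders standing in for the per-case results of two methods, not the values reported in Table 15.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder paired samples: best objective value per case for two methods.
rng = np.random.default_rng(1)
qsca_results = rng.uniform(1e-16, 1e-14, 12)    # twelve cases
other_results = rng.uniform(1e-9, 1e-7, 12)

stat, p_value = wilcoxon(qsca_results, other_results)
print(f"p = {p_value:.4g}, significant at 5%: {p_value < 0.05}")
```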

Table 16 Wilcoxon test for comparison results in Table 15

Power system applications

This section is devoted to validating the proposed algorithm from the point of view of applications (Saadat 2004). Two electrical networks have been selected, and the one-line diagram of each network is shown in Fig. 4. Figure 4a shows the three-bus system with generation at bus 1 (slack bus), where the voltage magnitude at bus 1 is adjusted to 1.05 per unit. The complete data of this system are marked in per unit on a 100-MVA base and the line charging susceptances are neglected, as shown in Fig. 4a. On the other hand, Fig. 4b shows the one-line diagram of the five-bus system, whose complete data are marked in per unit on a 100-MVA base as shown in Table 17.

Fig. 4
figure4

One-line diagram for the two systems. a three bus system b five bus system

Table 17 The complete data for the five bus system

The results for the two cases are depicted in Tables 18 and 19, respectively, where the optimum solution for the voltage and the phase angle at each node is reported. The obtained results demonstrate the effectiveness and robustness of the proposed algorithm in solving power system applications. Further, the convergence behavior is portrayed in Fig. 5.

Table 18 The results for the three bus system
Table 19 The results for the five bus system
Fig. 5
figure5

The convergence behavior for two systems

From the reported results, we can see that the proposed Q-SCA is clearly superior to the reported algorithms. Thus, we can conclude that the proposed algorithm is a practical and cost-effective scheme for solving nonlinear system equations from the viewpoint of realization.

Convergence analysis

In order to analyze the convergence of the proposed algorithm, statistical measures and a nonparametric test, namely the Wilcoxon signed-rank (WSR) test, were employed. Based on the simulation results recorded in Table 15, we can see that the proposed Q-SCA has better searching quality: it outperforms the other comparative algorithms for cases 2, 6, 7, and 9 and is competitive with some algorithms for the other cases. Moreover, the stability of the proposed Q-SCA is investigated in Table 15, where the worst solutions found by the proposed Q-SCA are better than the best ones found by the other algorithms. Further, the nonparametric WSR test is employed to determine the winning algorithm, and Table 16 shows that the proposed algorithm outperforms the LQPSO, QPSO, GRAV, ITHS, and SCA algorithms in terms of the obtained p-values. Regarding the presented analyses, it can be concluded that the inherent characteristic of this improvement lies in incorporating the quantum strategy as a local search strategy that accelerates the convergence behavior and avoids systematically running the algorithm without any improvement in the outcomes. It can be concluded that the proposed Q-SCA has a significant performance, and the premature convergence inaccuracies of the SCA phase are mitigated efficiently.

In this subsection, a comparative study has been performed to evaluate the performance of the proposed Q-SCA algorithm regarding the integrated scheme, closeness to the optimal solution and computational time. On the one hand, pure techniques still struggle to reach an optimal solution in a suitable time; stagnation and premature convergence may also occur in some pure techniques. Consequently, integrating the sine cosine algorithm with quantum local search has twofold features: avoiding premature convergence and improving the performance by using the properties of quantum behavior. On the other hand, the proposed Q-SCA algorithm is highly competitive when compared with the other methods in terms of the computed statistical measures. Thus, the use of quantum behavior has a great potential for solving nonlinear systems and their applications.

Application of the Q-SCA to real-world optimization problems

In this section, the proposed methodology is implemented for solving CEC 2017 benchmark problems and optimal power dispatch (OPD) problem as large-scale real-world optimization tasks.

Real-world problems from CEC 2017

To further investigate the performance of the proposed Q-SCA, it is applied to solve the CEC 2017 problems, which form a more competitive suite. The description of the CEC 2017 problems is listed in Table 20, which exhibits the nature of each problem (unimodal (U), multimodal (M), hybrid (H), and composition (C) functions) with the global optimum \(\left({{\varvec{F}}}^{*}\right)\) and the bounds for each dimension. As the CEC 2017 problems involve highly complex composite and hybrid natures (Aydilek 2018), robust optimizers are needed to achieve a suitable accuracy at a fast rate. In this context, the proposed Q-SCA algorithm is investigated on the CEC 2017 suite with 30 dimensions (30D) and 100 dimensions (100D) to affirm its scalability and robustness. The results obtained by the Q-SCA and SCA are compared with other methods reported in the literature (Aydilek 2018) including PSO (Aydilek 2018), FA (Aydilek 2018), FFPSO (Aydilek 2018), HPSOFF (Aydilek 2018), HFPSO (Aydilek 2018), and HGSO (Hashim et al. 2019). Table 21 lists the statistical measures for 30D and 100D, respectively. Based on the recorded results, it can be seen that the Q-SCA is better than SCA for all problems. On the other hand, Tables 22 and 23 record the mean and standard deviation (Std. dev.) of the optimal fitness value for each problem achieved by the proposed Q-SCA and the other competitors for 30D and 100D, respectively. As shown in Table 22, Q-SCA outperforms the other competitive methods for 30D except HGSO on the \({\mathrm{F}}_{\mathrm{CEC}-8}\), \({\mathrm{F}}_{\mathrm{CEC}-11}, {\mathrm{F}}_{\mathrm{CEC}-21}, {\mathrm{F}}_{\mathrm{CEC}-24}, {\mathrm{F}}_{\mathrm{CEC}-25}, {\mathrm{F}}_{\mathrm{CEC}-27}\), and \({\mathrm{F}}_{\mathrm{CEC}-29}\) problems, while for 100D, the proposed Q-SCA outperforms all the compared methods. For clarity, the results of the best algorithm among all compared algorithms are marked in boldface. Moreover, the convergence behaviors for some selected problems are depicted in Fig. 6.

Table 20 Characteristic of CEC 2017 benchmark problems
Table 21 Statistical measures achieved by the proposed algorithms for CEC 2017 benchmark problems
Table 22 Comparison of Q-SCA against others in solving CEC 2017 benchmark problems under 30D
Table 23 Comparison of Q-SCA against others in solving CEC 2017 benchmark problems under 100D
Fig. 6
figure6

Convergence curves of the proposed algorithms for 30D

OPD problem-based economical operation

To further investigate the effectiveness of the Q-SCA, it is applied to solve the realistic optimal power dispatch (OPD) problem and is compared with recent methods from the literature.

The basic formulation of the OPD problem can be described as follows:

$$\begin{gathered} Minimize:\,\,F(\user2{x,u})\,\, \hfill \\ Subject\,to:\left\{ \begin{gathered} G(\user2{x,u}) \le 0 \hfill \\ H(\user2{x,u}) = 0 \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$
(28)

where \(F(\user2{x,u})\,\) denotes the objective function, \({\varvec{x}}\) indicates the vector of dependent variables, \({\varvec{u}}\) defines the vector of control variables, \(G(\user2{x,u})\) represents the inequality constraints, and \(H(\user2{x,u})\) defines the equality constraints.

In this work, the objective function is to minimize the total fuel cost ($/h), \(F\), that is expressed as follows:

$$F = \sum\limits_{i = 1}^{NG} \left( a_{i} + b_{i} P_{{G_{i} }} + c_{i} P_{{G_{i} }}^{2} \right)$$
(29)

where \(P_{{G_{i} }}\) is the \(i{\text{th}}\) bus generated real (active) power, \(a_{i}\), \(b_{i}\) and \(c_{i}\) denote the coefficients of the fuel cost for the \(i{\text{th}}\) generator and \(NG\) denotes the number of generators.
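A small sketch of this cost evaluation is given below (Python, illustrative only); the two-generator coefficients are placeholders, not the IEEE 30-bus data of (Bouchekara 2014).

```python
import numpy as np

def total_fuel_cost(p_g, a, b, c):
    """Total fuel cost of Eq. (29): F = sum_i (a_i + b_i*P_Gi + c_i*P_Gi^2), in $/h."""
    p_g, a, b, c = map(np.asarray, (p_g, a, b, c))
    return float(np.sum(a + b * p_g + c * p_g ** 2))

# Placeholder two-generator example (P_G in MW, cost coefficients assumed):
print(total_fuel_cost([100.0, 80.0], a=[0.0, 0.0], b=[2.0, 1.75], c=[0.0037, 0.0175]))
```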

The objective function is optimized under a set of equality and inequality constraints. The equality constraints represent the power balance of the load flow and can be described as follows.

$$P_{Gi} - P_{Di} - V_{i} \sum\nolimits_{j = 1}^{NB} {V_{j} [G_{ij} \cos (\delta_{ij} ) + B_{ij} \sin (\delta_{ij} )]} = 0\,\forall i \in NB$$
(30)
$$Q_{Gi} - Q_{Di} - V_{i} \sum\nolimits_{j = 1}^{NB} {V_{j} [G_{ij} \sin (\delta_{ij} ) - B_{ij} \cos (\delta_{ij} )]} = 0\,\forall i \in NB$$
(31)

where \(\delta_{ij} = \left( {\delta_{i} - \delta_{j} } \right)\) denotes the difference of voltage angles between the buses \(i\) and \(j\), respectively, \(V_{i}\) denotes the voltage magnitude at bus \(i\), and \(NB\) defines the number of buses. Here,\(P_{D}\), \(Q_{D}\) denote the active and reactive load demands; \(G_{ij}\), \(B_{ij}\) are the transfer conductance and the susceptance among the buses \(i\) and \(j\), respectively.
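The sketch below (Python, illustrative only) evaluates the left-hand sides of Eqs. (30)-(31) for all buses at once, assuming the bus conductance and susceptance matrices \(G\) and \(B\) are available as full numpy arrays; at a load-flow solution both mismatch vectors should be close to zero.

```python
import numpy as np

def power_mismatch(V, delta, P_inj, Q_inj, G, B):
    """Active/reactive power mismatches of Eqs. (30)-(31) for every bus.
    P_inj and Q_inj are the net injections P_G - P_D and Q_G - Q_D."""
    V, delta = np.asarray(V), np.asarray(delta)
    d = delta[:, None] - delta[None, :]                               # delta_ij matrix
    P_calc = V * np.sum(V[None, :] * (G * np.cos(d) + B * np.sin(d)), axis=1)
    Q_calc = V * np.sum(V[None, :] * (G * np.sin(d) - B * np.cos(d)), axis=1)
    return P_inj - P_calc, Q_inj - Q_calc
```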

On the other hand, the inequality constraints on the equipment of the power system, as well as the restrictions imposed on lines and load buses to acquire system stability and security, can be formulated as follows. Generator constraints:

$$P_{{G_{i} }}^{{\min }} \le P_{{G_{i} }} \le P_{{G_{i} }}^{{\max }} ,\;i = 1,2,...,NG$$
(32)
$$V_{{G_{i} }}^{{\min }} \le V_{{G_{i} }} \le V_{{G_{i} }}^{{\max }} ,\;i = 1,2,...,NG$$
(33)
$$Q_{{G_{i} }}^{\min } \le Q_{{G_{i} }} \le Q_{{G_{i} }}^{\max } ,\;i = 1,2,...,NG$$
(34)

Shunt compensator constraints:

$$Q_{{C_{k} }}^{\min } \le Q_{{C_{k} }} \le Q_{{C_{k} }}^{\max } ,\;k = 1,2,...,NC$$
(35)

Transformer constraints:

$$T_{j}^{\min } \le T_{j} \le T_{j}^{\max } ,\;j = 1,2,...,NT$$
(36)

Security constraints:

$$V_{{L_{h} }}^{\min } \le V_{{L_{h} }} \le V_{{L_{h} }}^{\max } ,\;h = 1,2,...,NL$$
(37)
$$S_{{l_{p} }} \le S_{{l_{p} }}^{{\max }} ,\;p = 1,2,...,NTL$$
(38)

where \(NT\) is the number of transformers, \(NC\) is the number of shunt compensators, \(NL\) defines the number of load buses and \(NTL\) is the number of transmission lines.

The proposed SCA and Q-SCA have been implemented on the IEEE 30-bus system. Figure 7 exhibits the one-line diagram of the IEEE 30-bus test network, which has the following features (Bouchekara 2014; Biswas et al. 2018). The system involves 6 generator units located at buses 1, 2, 5, 8, 11, and 13 of the network, and 41 transmission lines. Also, four transformers are located in the transmission lines 6–9, 6–10, 4–12, and 27–28, with tap limits of (0.9–1.1). Reactive compensation sources are placed at load buses 10, 12, 15, 17, 20, 21, 23, 24, and 29, with ratings ranging from 0 to 5 MVAR. Moreover, the voltages of the PV buses are limited to the range 0.95 to 1.1 (p.u.), and the operating ranges of the load buses are 0.95–1.05 (p.u.). The other characteristics of this system, including the generator cost coefficients, bus data, and line data, are detailed in (Bouchekara 2014).

Fig. 7
figure7

IEEE 30-bus system

The proposed Q-SCA and the traditional SCA are applied to assess the total fuel cost in order to conduct the economic operation. The obtained results are recorded in Table 24. Based on these results, it can be observed that the control variables achieved by Q-SCA produce a lower fuel cost than those of SCA as well as the other methods. The convergence curve of optimizing the fuel cost is depicted in Fig. 8. Furthermore, the robustness and superiority of the proposed Q-SCA are further proved by comparisons with other competing methods taken from the literature, including ECHT-DE (Daryani et al. 2016), SP-DE (Daryani et al. 2016), SF-DE (Daryani et al. 2016), AGSO (Mohamed et al. 2017), MSA (Warid 2020), TLBO (Kumar and Premalatha 2015), ARCBBO (Pulluri et al. 2018), and SKH [58]. In this context, the results for these methods are provided in Table 25. Based on the reported results, it is realized that the Q-SCA outperforms the other state-of-the-art methods as it provides the lowest value for the fuel cost.

Table 24 Results of control variables obtained by SCA and Q-SCA for IEEE-30 bus
Fig. 8 Convergence curve for fuel cost by the proposed algorithms

Table 25 The optimum fuel cost obtained by the proposed Q-SCA and other methods for IEEE 30-bus system
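For readers reproducing this experiment, the objective being minimized is the total generation fuel cost; a minimal sketch of the usual quadratic cost model with a static penalty on constraint violations is given below, under the assumption that the cost coefficients a, b, c are taken from the system data (Bouchekara 2014). The penalty weight and the penalty form are illustrative choices, not the constraint handling actually used in the paper.

```python
import numpy as np

def fuel_cost(Pg, a, b, c):
    """Total quadratic fuel cost sum_i (a_i + b_i*Pg_i + c_i*Pg_i^2) in $/h."""
    Pg, a, b, c = map(np.asarray, (Pg, a, b, c))
    return float(np.sum(a + b * Pg + c * Pg ** 2))

def penalized_fitness(Pg, a, b, c, violations, rho=1.0e6):
    """Fuel cost plus a static quadratic penalty on constraint-violation magnitudes."""
    v = np.asarray(violations, dtype=float)
    return fuel_cost(Pg, a, b, c) + rho * float(np.sum(v ** 2))
```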

Thus, based on comprehensive experiments and comparisons on various optimization tasks, including general systems of nonlinear equations, the CEC 2017 benchmark suite, and the realistic OPD problem, it can be concluded that the Q-SCA is a promising approach and more fruitful than the other existing metaheuristic methods.


Conclusions

In this paper, to overcome SCA's inherent problem of premature convergence to local optima and to improve its exploration and exploitation tendencies, the algorithm is modified to redistribute the search agents according to the QLS concept. The experimental results show that hybridizing SCA with QLS accelerates the convergence behavior and enhances the quality of the obtained solutions. The Q-SCA is compared with SCA, L-QPSO, QPSO, GRAV, the ITHS algorithm, and other studies from the literature in terms of solution quality on 12 nonlinear systems of equations used as test cases, two electrical applications, the CEC 2017 benchmark, and the realistic OPD problem. The results verify that the Q-SCA has superior searching quality.

A careful examination of the evidence reveals the following benefits of the proposed Q-SCA.

  1. It can efficiently enrich the exploratory capabilities of the SCA phase by introducing a new updating rule based on the destination solution.

  2. The modification of the parameter \(r_{1}\) assists the SCA phase in being more exploitative in the last iterations.

  3. The QLS phase assists the algorithm in regaining a proper balance between the exploration and exploitation tendencies.

  4. It surpasses the other algorithms in terms of optimality.

  5. It can find the global optimal solution for the benchmark systems.

  6. It can deal with expensive large-scale tasks such as the CEC 2017 benchmark suite.

  7. It improves the convergence and boosts the performance while saving computational time.

  8. The obtained results confirm that the Q-SCA is capable of effectively escaping from local optima in the search space throughout the optimization.

The future work will concentrate on four directions: (i) developing new algorithms for these tasks; (ii) solving several complex systems in engineering and science; (iii) solving interval nonlinear systems of equations; and (iv) solving rough interval nonlinear systems of equations. Finally, we hope that this work will motivate other researchers working on new metaheuristic algorithms and on the optimization of electrical power stations.

References

  1. Abdollahi M, Isazadeh A, Abdollahi D (2013) Imperialist competitive algorithm for solving systems of nonlinear equations. Comput Math Appl 65(12):1894–1908


  2. Abdollahi M, Abdollahi D, Bouyer A (2016) Improved cuckoo optimization algorithm for solving systems of nonlinear equations. J Supercomput 72(3):1246–1269


  3. Aydilek IB (2018) A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Applied Soft Comput 66:232–249


  4. Biswas PP, Suganthan PN, Mallipeddi R, Amaratunga GA (2018) Optimal power flow solutions using differential evolution algorithm integrated with effective constraint handling techniques. Eng Appl Artif Intell 68:81–100


  5. Bouchekara HREH (2014) Optimal power flow using black-hole-based optimization approach. Appl Soft Comput 24:879–888


  6. Dai J, Wu G, Wu Y, Zhu G (2008) Helicopter trim research based on hybrid genetic algorithm. In: World congress on intelligent control and automation, p 2007–2011. IEEE

  7. Daryani N, Hagh MT, Teimourzadeh S (2016) Adaptive group search optimization algorithm for multi-objective optimal power flow problem. Appl Soft Comput 38:1012–1024


  8. Das S, Suganthan P (2011) Differential evolution: a survey of the state-of-the-art. IEEE Trans Evol Comput 15(1):4–31


  9. Dorigo M, Maniezzo V, Colorni A (1996) The ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern B Cybern 26(1):29–41


  10. El-Sawy AA, Zaki EM, Rizk-Allah RM (2013) Novel hybrid ant colony optimization and firefly algorithm for multi-objective optimization problems. Int J Math Arch 4(1):152–161


  11. El-Sawy AA, Zaki EM, Rizk-Allah RM (2013) A novel hybrid ant colony optimization and firefly algorithm for solving constrained engineering design problems. J Nat Sci Math 6(1):1–22


  12. Floudas CA, Pardalos PM, Adjiman CS, Esposito WR, Gumus ZH, Harding ST, Klepeis JL, Meyer CA, Schweiger CA (1999) Handbook of test problems in local and global optimization. Kluwer Academic Publishers, Dordrecht


  13. Garcia S, Fernandez A, Luengo J, Herrera F (2009) A study of statistical techniques and performance measures for genetics-based machine learning, accuracy and interpretability. Soft Comput 13:959–977


  14. Goyel M (2007) Computer-based numerical & statistical techniques. Infinity Science Press LLC, Hingham


  15. Grosan C, Abraham A (2008) A new approach for solving nonlinear equations systems. IEEE Trans Syst Man Cybern part A 38(3):698–714


  16. Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S (2019) Henry gas solubility optimization: a novel physics-based algorithm. Future Gener Comput Syst 101:646–667


  17. Hoffman JD (2001) Numerical methods for engineers and scientists, 2nd edn. Marcel Dekker, New York


  18. Holland J (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor


  19. Jaberipour M, Khorram E, Karimi B (2011) Particle swarm algorithm for solving systems of nonlinear equations. Comput Math Appl 62(2):566–576


  20. Jaeger G (2006) Quantum information: an overview. Springer, Berlin


  21. Jäger C, Ratz D (1995) A combined method for enclosing all solutions of nonlinear systems of polynomial equations. Reliab Comput 1(1):41–64


  22. Kelley CT (2003) Solving nonlinear equations with Newton’s method, vol 1. SIAM, Philadelphia


  23. Kennedy J, Eberhart R (1995) Particle swarm optimization. Proc IEEE Int Conf Neural Netw 4:1942–1948


  24. Kumar AR, Premalatha L (2015) Optimal power flow for a deregulated power system using adaptive real coded biogeography-based optimization. Int J Electr Power Energy Syst 73:393–399


  25. Luo YZ, Tang GJ, Zhou LN (2008) Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method. Appl Soft Comput 8(2):1068–1073


  26. Mo Y, Liu H, Wang Q (2009) Conjugate direction particle swarm optimization solving systems of nonlinear equations. Comput Math Appl 57(11):1877–1882


  27. Mohamed AAA, Mohamed YS, El-Gaafary AAM, Hemeida AM (2017) Optimal power flow using moth swarm algorithm. Elec Power Syst Res 142:190–206


  28. Oliveira HA, Petraglia A (2013) Solving nonlinear systems of functional equations with fuzzy adaptive simulated annealing. Appl Soft Comput 13(11):4349–4357


  29. Ouyang A, Zhou Y, Luo Q (2009) Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations. In: International conference on granular computing, GRC’09, p 460–465. IEEE

  30. Pan WT (2012) A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl-Based Syst 26(2):69–74


  31. Pulluri H, Naresh R, Sharma V (2018) A solution network based on stud krill herd algorithm for optimal power flow problems. Soft Comput 22(1):159–176


  32. Rizk-Allah RM, Hassanien AE (2018c) New binary bat algorithm for solving 0–1 knapsack problem. Complex Intell Syst 4(1):31–53


  33. Rizk-Allah RM (2018d) An improved sine cosine algorithm based on orthogonal parallel information for global optimization. Soft Comput. https://doi.org/10.1007/s00500-018-3355-y


  34. Rizk-Allah RM (2014) A novel multi-ant colony optimization for multi-objective resource allocation problems. Int J Math Arch 5(9):183–192


  35. Rizk-Allah RM (2016a) An improved firefly algorithm based on local search method for solving global optimization problems. Int J Manag Fuzzy Syst 2(6):51–57


  36. Rizk-Allah RM (2016b) Hybridization of fruit fly optimization algorithm and firefly algorithm for solving nonlinear programming problems. Int J Swarm Intel Evol Comput 5(2):1–10


  37. Rizk-Allah RM (2018) Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems. J Comput Des Eng 5(2):249–273


  38. Rizk-Allah RM, Zaki EM, El-Sawy AA (2013) Hybridizing ant colony optimization with firefly algorithm for unconstrained optimization problems. Appl Math Comput 224(1):473–483


  39. Rizk-Allah RM, El-Sehiemy RA, Deb S, Wang GG (2017) A novel fruit fly framework for multi-objective shape design of tubular linear synchronous motor. J Supercomput 73(3):1235–1256


  40. Rizk-Allah RM, Hassanien AE, Bhattacharyya S (2018) Chaotic crow search algorithm for fractional optimization problems. Appl Soft Comput 71:1161–1175


  41. Rizk-Allah RM, El-Sehiemy RA, Wang GG (2018) A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution. Appl Soft Comput 63:206–222


  42. Rizk-Allah RM, Hassanien AE, Elhoseny M, Gunasekaran M (2019) A new binary salp swarm algorithm: development and application for optimization tasks. Neural Comput Appl 31(5):1641–1663


  43. Saadat H (1999) Power system analysis. McGraw-Hill, United States


  44. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133


  45. Sharma JR, Arora H (2013) On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl Math Comput 222:497–506


  46. Sun J, Xu W, Feng B (2005) Adaptive parameter control for quantum-behaved particle swarm optimization on individual level. IEEE Int Conf Syst Man Cybern 4:3049–3054


  47. Turgut OE, Turgut MS, Coban MT (2014) Chaotic quantum behaved particle swarm optimization algorithm for solving nonlinear system of equations. Comput Math Appl 68(4):508–530


  48. Wang C, Luo R, Wu K, Han B (2011) A new filled function method for an unconstrained nonlinear equation. J Comput Appl Math 235(6):1689–1699


  49. Wang G-G, Gandomi AH, Yang X-S, Alavi AH (2014) A novel improved accelerated particle swarm optimization algorithm for global numerical optimization. Eng Comput 31(7):1198–1220. https://doi.org/10.1108/EC-10-2012-0232


  50. Warid W (2020) Optimal power flow using the AMTPG-Jaya algorithm. Appl Soft Comput 91:106252


  51. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82


  52. Wu Z, Kang L (2003) A fast and elitist parallel evolutionary algorithm for solving systems of non-linear equations. Proc Congr Evolut Comput 2:1026–1028


  53. Wu J, Cui Z, Liu J (2011) Using hybrid social emotional optimization algorithm with metropolis rule to solve nonlinear equations. In: IEEE 10th International conference on cognitive informatics and cognitive computing (ICCI-CC'11), p 405-411. IEEE

  54. Xi M, Sun J, Xu W (2008) An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Appl Math Comput 205(2):751–759


  55. Yang XS (2010) Engineering optimisation: an introduction with metaheuristic applications. Wiley, New York


  56. Yuan G, Lu X (2008) A new backtracking inexact BFGS method for symmetric nonlinear equations. Comput Math Appl 55(1):116–129


  57. Zouache D, Nouioua F, Moussaoui A (2016) Quantum-inspired firefly algorithm with particle swarm optimization for discrete optimization problems. Soft Comput 20(7):2781–2799



Author information


Correspondence to Rizk M. Rizk-Allah.




Keywords

  • Sine cosine algorithm
  • Quantum strategy
  • Systems of nonlinear equations
  • Power system applications