1 Introduction

Optimization algorithms are used in many engineering fields. In recent years, efficiency and speed have become even more important for applications that are costly in both computation and time, such as data mining or image processing. Complex problems with higher dimensions, more variables, and more constraints have emerged, and different solution approaches are followed according to the needs of specific applications. Solving such problems with traditional numerical techniques is often inefficient and time consuming. In cases where the solution set cannot be computed analytically within an acceptable timeframe, metaheuristic optimization algorithms come into play. These algorithms do not guarantee the best result, but they produce near-optimal solutions in a reasonable amount of time.

The seminal articles published in the 1990s paved the way for the hundreds of optimization algorithms available today. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Tabu Search are among the most well-known of these algorithms [1,2,3]. The many algorithms developed since have been grouped according to their sources of inspiration: evolutionary, physics-based, swarm-based, and human-based algorithms [4]. Evolutionary algorithms are inspired by the phenomenon of evolution: the better candidate solutions are combined to form the next generation of possible solutions, so that newer generations provide more accurate results than older ones. The Genetic Algorithm, which employs genetic elements and events such as chromosomes, crossover, and mutation, is a well-known member of this category. Physics-based algorithms are inspired by physical laws; AOA [5], developed by simulating Archimedes' principle, is one example. Human-based algorithms are inspired by human behaviors; the Mother Optimization Algorithm (MOA), based on the interaction of a mother with her children, is one such algorithm [6]. Swarm-based algorithms mimic the biological and social behavior of animals, such as prey searching or mating, as a computational optimization method for solving complex problems. PSO, the Salp Swarm Algorithm (SSA), and the Whale Optimization Algorithm (WOA) are examples of swarm-based algorithms [4, 7].

In the last decade, optimization algorithms have improved, and new algorithms providing intuitive solution approaches and better results have been proposed. For instance, the Honey Badger Algorithm (HBA) is a population-based optimizer inspired by the foraging behavior of the honey badger [8]. The Sine Cosine Algorithm (SCA) searches for the best solution using sine and cosine functions [9]. In [10], the Butterfly Optimization Algorithm (BOA) is introduced; it mimics the mating and food-searching behavior of butterflies. In [11], a nature-inspired optimizer called Golden Jackal Optimization (GJO) is proposed, driven by the collective hunting habits of golden jackals: searching for, surrounding, and capturing prey. The Whale Optimization Algorithm (WOA) is another swarm-based metaheuristic, developed with inspiration from the hunting method of whales [4]. Fennec Fox Optimization (FFA) is inspired by the digging and escape behavior of the fennec fox [12]. In [13], the Mutated Leader Algorithm (MLA) is proposed, in which initial random solutions are updated by a mutated leader. Two-stage Optimization (TSO) updates population members based on the good members of the population [14]; activities such as random walks provide diversity in the population. The Salp Swarm Algorithm (SSA), proposed in [7], is a swarm-based algorithm inspired by the navigation and hunting behaviors of salps. Atomic Orbital Search (AOS) is a physics-based algorithm utilizing the laws of quantum-based atomic theory [15].

According to the no free lunch (NFL) theorem, no single optimization algorithm can provide the best results over all classes of problems [16]. Therefore, in recent years numerous new optimization algorithms have been published, and hybrid or modified algorithms have been proposed to obtain better results. For instance, PSO is combined with the Seeker Optimization Algorithm (SOA) to develop a new hybrid algorithm called SOAPSO, which is tested on various benchmark functions [17]. In [18], a hybrid optimization algorithm is formed from WOA and Modified Differential Evolution (MDE); the study aims to improve WOA with respect to local optima, population diversity, and early convergence. Another hybrid algorithm is HPSOBOA, which combines PSO with BOA and uses both algorithms to obtain superior results [19].

In [20], the authors introduce a new approach using trigonometric operators to improve the exploitation phase of the original AOA; sine and cosine functions are used to avoid local optima. In [21, 22], TLBO is modified to improve solutions and accelerate convergence: the first study alters the updating mechanism of a single solution, while the second organizes the population individuals into groups across phases. In another study, the Harris Hawks Optimization (HHO) algorithm is modified by introducing various update strategies [23]. In [24], the Moth Flame Optimization (MFO) algorithm is modified to avoid local optima and early convergence by utilizing a modified dynamic opposite learning (DOL) strategy, which seeks a better solution by determining a quasi-opposite number. A modified version of BOA is proposed in [25], where parameters such as the switching probability, power of exponent (a), stimulus intensity (I), and sensor modality (c) are adjusted in search of better working efficiency. The Arithmetic Optimization Algorithm, a recently proposed method inspired by arithmetic operators, is modified by incorporating an opposition-based learning (OBL) operator and a constant parameter [26]. Similarly, in [27,28,29,30,31,32,33,34], algorithms are hybridized or improved to obtain better results in fields such as optimal power flow, mobile robot path planning, and centrifugal pump optimization.

The modification process can be carried out by adjusting the coefficients in an algorithm or by changing parts of its structure. Modification focuses on obtaining better results while preserving the performance of the algorithm. Although there are many studies on the improvement of optimization algorithms, problem-oriented improvements are also achievable: studies that modify an algorithm around a specific problem have obtained relatively better results [35,36,37,38].

This study aims to increase the performance of AOA on a wide range of problems. AOA features simplicity, scalability, and few control parameters. In addition, it has been evaluated on complex test functions and achieved better results than the other algorithms compared in [5]. Furthermore, it has an efficient and robust structure with regard to the exploration-exploitation balance. In the exploration stage, new solutions are sought in unvisited areas of the search space; in the exploitation stage, the algorithm searches the neighborhood of solutions already found, improving the fitness value and yielding more accurate solutions. The balance between the two stages significantly affects the success of the algorithm.

Therefore, to further increase the effectiveness of AOA, part of the algorithm and its parameters were modified to calculate problem-specific coefficients. The candidate positions of objects are optimized using the dimension learning-based (DL) strategy given in [39, 40]. In addition, another metaheuristic algorithm, HBA, is used to calculate the coefficients; in other words, one optimization algorithm is tuned by another.

A summary of contributions of this study is:

  • MDAOA, a modified version of AOA, is developed which provides better results on a wide range of problem functions.

  • Modification is applied as a two-step process: optimizing the candidate positions of objects using the dimension learning-based strategy, and modifying five predetermined parameters used in the original AOA. The parameters are optimized with a different optimization algorithm, namely HBA, in order to solve a specific engineering problem.

  • Early convergence is avoided and the balance between the exploration and exploitation phases of AOA is improved.

  • The proposed modified algorithm, MDAOA, is tested on four groups of problem functions: standard benchmark functions, the CEC 2017 test suite, engineering problems, and the optimal placement of EVCSs on the IEEE-33 distribution system. Results indicate that MDAOA produces better results than other well-known algorithms by calculating problem-specific parameters.

The rest of the paper is organized as follows: Sect. 2 presents AOA before the modification steps are applied, along with HBA, which is used in the modification process. Section 3 describes the modification of AOA. Section 4 presents the simulation results in detail. Section 5 concludes based on the results.

2 Optimization algorithms

2.1 Archimedes optimization algorithm

AOA is an optimization algorithm inspired by Archimedes' principle [5]. An object immersed in a liquid is pushed up by a buoyant force equivalent to the weight of the displaced liquid. According to this approach, every object immersed in the liquid tries to reach the equilibrium state, in which the buoyant force and the weight of the object are equal. This condition is given in Eq. 1. Equations 1–13 are taken from reference [5].

$$F_{b} = w_{o}; \quad p_{b} v_{b} a_{b} = p_{o} v_{o} a_{o}$$
(1)

where \(v\) is the volume, \(p\) is the density, \(a\) is the acceleration, the subscript \(b\) indicates the fluid, and \(o\) indicates the immersed object. The speed of an object in the liquid is determined by its volume and weight.

In AOA, submerged objects generate a population. Initial search is performed with random values which is a common practice in most optimization algorithms. For every iteration, the values of density and volume are updated until the algorithm’s ending criteria are fulfilled. Steps implemented in AOA can be listed as:

Step 1 Values of the objects are randomly assigned as in Eq. 2.

$$O_{i} = lb_{i} + rand \times \left( {ub_{i} - lb_{i} } \right)\quad i = 1,2, \ldots ,N$$
(2)

where \(N\) is the population size, \(O_{i}\) is the \(i\)th object in the population, and \(lb_{i}\) and \(ub_{i}\) are the lower and upper bounds. Volume (\(vol\)) and density (\(den\)) values are randomly initialized as in Eq. 3, and acceleration (\(acc_{i}\)) is initialized by Eq. 4 [5].

$$den_{i} = rand; \quad vol_{i} = rand$$
(3)
$$acc_{i} = lb_{i} + rand \times \left( {ub_{i} - lb_{i} } \right)$$
(4)

Step 2 Density and volume are updated by Eq. 5.

$$den_{i}^{t + 1} = den_{i}^{t} + rand \times \left( {den_{best} - den_{i}^{t} } \right); \quad vol_{i}^{t + 1} = vol_{i}^{t} + rand \times \left( {vol_{best} - vol_{i}^{t} } \right)$$
(5)

where \(den_{best}\) and \(vol_{best}\) are the density and volume values of the best object found so far.

Step 3 The transfer operator (TF) is increased while the density factor is decreased. This enables the changeover between the exploration and exploitation phases, reaching the equilibrium state after the collisions. TF is computed by Eq. 6.

$${\text{TF}} = \exp \left( {\frac{{t - t_{\max } }}{{t_{\max } }}} \right)$$
(6)

where \(t\) is the current iteration and \(t_{\max }\) is the maximum number of iterations. The density decreasing factor (d) decreases over time according to Eq. 7:

$$d^{t + 1} = \exp \left( {\frac{{t_{\max } - t}}{{t_{\max } }}} \right) - \left( {\frac{t}{{t_{\max } }}} \right)$$
(7)

Step 4 Exploration phase: In this step, collisions occur according to the TF value. The object's acceleration (\(acc_{i}\)) is updated by Eq. 8.

$$acc_{i}^{t + 1} = \frac{{den_{mr} + vol_{mr} \times acc_{mr} }}{{den_{i}^{t + 1} \times vol_{i}^{t + 1} }}$$
(8)

where \(den_{i}\) and \(vol_{i}\) are the density and volume of object \(i\), \(acc_{i}\) is its acceleration, and the subscript \(mr\) denotes a randomly selected material (object).

Step 5 Exploitation phase: Depending on the TF value, no collision takes place. In this case, the object's acceleration is updated by Eq. 9.

$$acc_{i}^{t + 1} = \frac{{den_{best} + vol_{best} \times acc_{best} }}{{den_{i}^{t + 1} \times vol_{i}^{t + 1} }}$$
(9)

where \(acc_{best}\) is the acceleration of the best object.

Step 6 Normalize acceleration step: The acceleration is large when the solution is far from the global optimum and decreases over time otherwise. The normalized acceleration \(acc_{i - norm}^{t + 1}\) therefore adjusts the change of step size for each object, using Eq. 10.

$$acc_{i - norm}^{t + 1} = u \times \frac{{acc_{i}^{t + 1} - \min \left( {acc} \right)}}{{\max \left( {acc} \right) - \min \left( {acc} \right)}} + l$$
(10)

where \(u\) and \(l\) are the upper and lower normalization limits.

Step 7 Update step. In this step, positions are updated.

If TF is less than or equal to 0.5 (exploration phase), Eq. 11 is used.

$$x_{i}^{t + 1} = x_{i}^{t} + C_{1} \times rand \times acc_{i - norm}^{t + 1} \times d \times \left( {x_{rand} - x_{i}^{t} } \right)$$
(11)

where C1 is equal to 2. If TF is greater than 0.5, the exploitation phase is executed and object positions are updated using Eq. 12.

$$x_{i}^{t + 1} = x_{best}^{t} + F \times C_{2} \times rand \times acc_{i - norm}^{t + 1} \times d \times \left( {T \times x_{best} - x_{i}^{t} } \right)$$
(12)

where C2 = 6 and T = C3 × TF. The value of T increases with time over the range [C3 × 0.3, 1]. \(F\) is the flag parameter used for altering the search direction, given by Eq. 13:

$$F = \left\{ {\begin{array}{*{20}l} { + 1} \hfill & { {\text{if}}\; P \le 0.5} \hfill \\ { - 1} \hfill & {{\text{if}}\;P > 0.5} \hfill \\ \end{array} } \right.$$
(13)

where \(P = 2 \times rand - C_{4}\).

Step 8 Evaluation step. In this step, the fitness function is evaluated; if a better result is found, it is stored as the new best.
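Putting Steps 1 through 8 together, the following minimal Python sketch illustrates the AOA loop of Eqs. 2–13. It is not the reference implementation of [5]: the normalization limits (l = 0.1, u = 0.9), the boundary clipping, and the 0.5 phase threshold (exposed here as p3) are illustrative assumptions, while C1 = 2 and C2 = 6 follow the values stated above.

```python
import numpy as np

def aoa(fobj, lb, ub, dim, n=30, t_max=1000, C1=2, C2=6, C3=2, C4=0.5, p3=0.5):
    """Minimal sketch of AOA (Eqs. 2-13). C3, C4 and p3 are the constants
    tuned in Sect. 3; u = 0.9 and l = 0.1 are assumed normalization limits."""
    rng = np.random.default_rng()
    x = lb + rng.random((n, dim)) * (ub - lb)                # Eq. 2: positions
    den = rng.random((n, dim)); vol = rng.random((n, dim))   # Eq. 3
    acc = lb + rng.random((n, dim)) * (ub - lb)              # Eq. 4
    fit = np.array([fobj(xi) for xi in x])
    b = fit.argmin()
    x_b, f_b = x[b].copy(), fit[b]
    den_b, vol_b, acc_b = den[b].copy(), vol[b].copy(), acc[b].copy()

    for t in range(1, t_max + 1):
        tf = np.exp((t - t_max) / t_max)                 # Eq. 6: transfer operator
        d = np.exp((t_max - t) / t_max) - t / t_max      # Eq. 7: density factor
        den += rng.random((n, dim)) * (den_b - den)      # Eq. 5
        vol += rng.random((n, dim)) * (vol_b - vol)
        raw = np.empty((n, dim))
        for i in range(n):
            if tf <= p3:                                 # exploration: collision
                mr = rng.integers(n)                     # random material
                raw[i] = (den[mr] + vol[mr] * acc[mr]) / (den[i] * vol[i])  # Eq. 8
            else:                                        # exploitation: no collision
                raw[i] = (den_b + vol_b * acc_b) / (den[i] * vol[i])        # Eq. 9
        # Eq. 10: normalize acceleration into [l, u] = [0.1, 0.9] (assumed)
        acc = 0.9 * (raw - raw.min()) / (raw.max() - raw.min() + 1e-30) + 0.1
        for i in range(n):
            if tf <= p3:                                 # Eq. 11: exploration move
                xr = x[rng.integers(n)]
                x[i] = x[i] + C1 * rng.random(dim) * acc[i] * d * (xr - x[i])
            else:                                        # Eq. 12: exploitation move
                T = C3 * tf
                F = 1 if 2 * rng.random() - C4 <= 0.5 else -1   # Eq. 13
                x[i] = x_b + F * C2 * rng.random(dim) * acc[i] * d * (T * x_b - x[i])
        x = np.clip(x, lb, ub)
        fit = np.array([fobj(xi) for xi in x])           # Step 8: evaluate
        b = fit.argmin()
        if fit[b] < f_b:                                 # keep the best object
            x_b, f_b = x[b].copy(), fit[b]
            den_b, vol_b, acc_b = den[b].copy(), vol[b].copy(), acc[b].copy()
    return x_b, f_b
```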

2.2 Honey Badger algorithm

Five constant values in AOA are optimized using HBA. HBA is a well-performing algorithm that has been tested on standard benchmark functions, engineering problems, and the CEC 2017 benchmark functions, with results indicating that it is effective in solving complex problems. In addition, HBA performs well in terms of the balance between the exploration and exploitation phases, as well as convergence speed. For these reasons, HBA is employed to find the constant parameters of AOA.

HBA was inspired by the foraging behavior of the honey badger [8]. A food source or prey is located in two ways: by smelling and digging, or by following a honey guide bird that leads the badger to a source of honey. The first phase is the digging phase, in which the prey's rough location is established through smell and an appropriate spot is selected for digging. The second phase is the honey phase, in which the honey guide bird is tracked in order to locate the source of honey. The pseudo code of HBA applied in the study is given in Algorithm 1 [8].

Algorithm 1 The pseudo code of HBA applied in the study

The system parameters are specified, and the initial positions are randomly determined. The population of honey badgers is represented by the matrix below [8].

$${\text{Candidate}}\;{\text{solutions}} = \left[ {\begin{array}{*{20}c} {x_{11} } & {x_{12} } & \cdots & {x_{1d} } \\ \vdots & \vdots & \ddots & \vdots \\ {x_{n1} } & {x_{n2} } & \cdots & {x_{nd} } \\ \end{array} } \right]$$
$$i{\text{th}}\;{\text{position}}\;{\text{of}}\;{\text{honey}}\;{\text{badger}}\; x_{i} = \left[ {x_{i}^{1} ,\; x_{i}^{2} ,\; \ldots ,\; x_{i}^{d} } \right]$$

Initial positions are generated and the corresponding fitness evaluation results are stored using Eq. 14. Equations 14–19 are obtained from reference [8]. \(r_{i}\) denotes a random number between 0 and 1, where i = 1, …, 7.

$$x_{i} = lb_{i } + r_{1} \times \left( {ub_{i} - lb_{i} } \right)$$
(14)

where \(N\) is the population size and \(x_{i}\) is the position of the \(i\)th honey badger.

The fitness function is calculated for each honey badger. The best position of xprey is remembered and fitness value is assigned to fprey. Afterward, Ii, the smell intensity, is calculated using Eq. (15).

$$I_{i} = r_{2} \times \frac{S}{{4\pi d_{i}^{2} }}; \quad S = \left( {x_{i} - x_{i + 1} } \right)^{2}; \quad d_{i} = x_{prey} - x_{i}$$
(15)

Decreasing factor (α) is updated using Eq. (16).

$$\alpha = C \times {\text{exp}}\left( {\frac{ - t}{{t_{max} }}} \right)$$
(16)

The new positions \(x_{new}\) are computed by either Eq. 17 or Eq. 19, depending on a random number.

$$x_{new} = x_{prey} + F \times \beta \times I \times x_{prey} + F \times r_{3} \times \alpha \times d_{i} \times \left| {\cos \left( {2\pi r_{4} } \right) \times \left[ {1 - \cos \left( {2\pi r_{5} } \right)} \right]} \right|$$
(17)

\(F\) is the flag which changes direction. Its value is determined by Eq. 18.

$$F = \left\{ {\begin{array}{*{20}l} 1 \hfill & {{\text{if}}\; r_{6} \le 0.5} \hfill \\ { - 1} \hfill & {{\text{else}}} \hfill \\ \end{array} } \right.$$
(18)

Honey phase is the second part of Step 5. Equation 19 simulates the condition where the honey guide bird is followed to find honey.

$$x_{new} = x_{prey} + F \times r_{7} \times \alpha \times d_{i}$$
(19)

where \(x_{new}\) is the updated position of the honey badger and \(x_{prey}\) indicates the location of the food/prey.

The updated positions are evaluated and their fitness values assigned to \(f_{new}\). The steps are repeated until the ending criteria are met.
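The HBA update rules above can be condensed into a short sketch. The Python fragment below implements one iteration of Eqs. 15–19 under stated assumptions: β = 6 and C = 2 are taken as the defaults suggested in [8], the digging/honey choice is made with probability 0.5, and a small constant guards the division in Eq. 15.

```python
import numpy as np

def hba_step(x, x_prey, t, t_max, lb, ub, beta=6.0, C=2.0, rng=None):
    """One HBA position update (Eqs. 15-19). beta (food-gathering ability)
    and C are the defaults suggested in [8]; treat them as assumptions."""
    rng = rng or np.random.default_rng()
    n, _ = x.shape
    alpha = C * np.exp(-t / t_max)                       # Eq. 16: decreasing factor
    x_new = np.empty_like(x)
    for i in range(n):
        r2, r3, r4, r5, r6, r7 = rng.random(6)           # r_i in [0, 1]
        S = (x[i] - x[(i + 1) % n]) ** 2                 # Eq. 15: source strength
        d_i = x_prey - x[i]                              # Eq. 15: distance to prey
        I = r2 * S / (4 * np.pi * d_i ** 2 + 1e-30)      # Eq. 15: smell intensity
        F = 1 if r6 <= 0.5 else -1                       # Eq. 18: direction flag
        if rng.random() < 0.5:                           # digging vs honey phase
            x_new[i] = (x_prey + F * beta * I * x_prey   # Eq. 17: digging phase
                        + F * r3 * alpha * d_i
                        * np.abs(np.cos(2 * np.pi * r4)
                                 * (1 - np.cos(2 * np.pi * r5))))
        else:
            x_new[i] = x_prey + F * r7 * alpha * d_i     # Eq. 19: honey phase
    return np.clip(x_new, lb, ub)
```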

3 Modifying Archimedes optimization algorithm

In [5], AOA is compared with recent and state-of-the-art optimization algorithms and provides very good solutions to standard, real-world engineering, and CEC benchmark functions. However, careful analysis and testing on benchmark functions show that population diversity can be increased, leading to an improved balance between exploitation and exploration and a more precise set of solutions. The modification is applied in two stages: optimizing the candidate positions of objects using the dimension learning-based strategy, and tuning five predetermined parameters of the original AOA with a different optimization algorithm, namely HBA, in order to solve a specific engineering problem.

3.1 Stage 1: applying dimension learning-based steps

In this stage, each iteration uses two strategies to move an object to a better position: the DL strategy and the standard AOA search strategy, similar to the work in [35].

In the original AOA, Steps 4 through 7 given in Sect. 2 balance the exploration and exploitation phases. Applying this stage to AOA improves that balance further.

The DL strategy uses a distinctive methodology to establish a neighborhood for each object, through which neighborhood information can be conveyed among objects. The DL strategy consists of four phases.

3.1.1 Initiation step

\(N\) is the population of objects. They are distributed randomly by Eq. 20.

$$X_{ij} = l_{j} + frnd_{j} \left[ {0, 1} \right] \times \left( {u_{j} - l_{j} } \right), i \in \left[ {1, \ldots , N} \right], j \in \left[ {1, \ldots , D} \right]$$
(20)

where D is the dimension, \(X_{i}(t)\) represents the position of the \(i\)th immersed object at iteration \(t\), and \(frnd\) indicates a random number drawn from the F distribution.

3.1.2 Movement/transfer step

In the dimension learning strategy, objects are relocated based on surrounding objects in order to generate a new candidate location for Xi(t).

The dimensions of an object's new position are computed by Eq. 23. First, the radius between the current position and the candidate position produced by standard AOA is computed by Eq. 21 [39, 40].

$$Rad_{i} \left( t \right) = \left\| {x_{i} \left( t \right) - x_{i - AOA} \left( {t + 1} \right)} \right\|$$
(21)

Afterward, object’s neighbors are computed by Eq. 22.

$$N_{i} \left( t \right) = \{ X_{j} \left( t \right) | D_{i} \left( { X_{i} \left( t \right),X_{j} \left( t \right) } \right) \le Rad_{i} \left( t \right), X_{j} \left( t \right) \in population \}$$
(22)

where Ni(t) is the matrix containing the neighbors of Xi(t).

3.1.3 Selecting and updating step

Neighbor relocating is performed by Eq. 23.

$$X_{i - DL,d} \left( {t + 1} \right) = X_{i,d} \left( t \right) + frnd \times \left( {X_{n,d} \left( t \right) - X_{r,d} \left( t \right)} \right)$$
(23)

where \(X_{n,d} \left( t \right)\) is a random neighbor selected from \(N_{i} \left( t \right)\), and \(X_{r,d} \left( t \right)\) is a random object selected from the population.

The fitness values of Xi−AOA(t + 1) and Xi−DL,d(t + 1) are compared, and the better of the two candidate locations is kept according to Eq. 24 [40].

$$X_{i} \left( {t + 1} \right) = \left\{ {\begin{array}{*{20}l} {X_{i - AOA} \left( {t + 1} \right),} \hfill & { {\text{if}}\;fobject\left( {X_{i - AOA} } \right) < fobject\left( {X_{i - DL} } \right) } \hfill \\ {X_{i - DL} \left( {t + 1} \right) } \hfill & {{\text{otherwise}}} \hfill \\ \end{array} } \right.$$
(24)

3.1.4 Termination step

This process is repeated until the maximum number of iterations is reached.
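As a sketch, the four DL phases reduce to the per-iteration routine below, applied after the standard AOA move has produced the candidates \(x_{i - AOA}(t + 1)\). Euclidean distances are assumed for Eq. 22 and a uniform random multiplier stands in for \(frnd\); both are illustrative choices rather than prescriptions from [39, 40].

```python
import numpy as np

def dl_update(x, x_aoa, fobj, rng=None):
    """Dimension learning step (Eqs. 21-24): build a neighborhood for each
    object, recombine neighbor dimensions, and keep the better candidate."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    x_next = np.empty_like(x)
    for i in range(n):
        rad = np.linalg.norm(x[i] - x_aoa[i])            # Eq. 21: radius
        dist = np.linalg.norm(x - x[i], axis=1)          # distances to all objects
        nbrs = np.flatnonzero(dist <= rad)               # Eq. 22: neighborhood
        if nbrs.size == 0:
            nbrs = np.array([i])                         # fall back to itself
        x_dl = np.empty(dim)
        for k in range(dim):                             # Eq. 23: per dimension
            xn = x[rng.choice(nbrs), k]                  # random neighbor
            xr = x[rng.integers(n), k]                   # random population member
            x_dl[k] = x[i, k] + rng.random() * (xn - xr)
        # Eq. 24: keep whichever candidate has the better (lower) fitness
        x_next[i] = x_aoa[i] if fobj(x_aoa[i]) < fobj(x_dl) else x_dl
    return x_next
```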

3.2 Stage 2: updating parameters

In this stage, the C3 and C4 values in the AOA algorithm take different values for the CEC 2017 problems, the engineering problems, and the standard optimization functions. In the AOA article, a sensitivity analysis was performed on three selected CEC 2017 test functions. Partial cost function values obtained by changing the constant parameters are given in Table 1.

Table 1 Sensitivity analysis for the parameter values [5]

It is stated in the AOA article that different values could be tried depending on the difficulty and landscape of the problem [5]. Algorithms run with default parameters may perform well; however, fine-tuning the parameters for a specific problem returns better solutions. For example, in [35] the structure of AOA is modified for solving the optimal power flow problem on three different power systems, and the results show the effectiveness obtained by the modification. Therefore, one or more parameters of an algorithm, or part of its structure, can be modified to increase the effectiveness of the outcome. These parameters can be optimized by another optimization algorithm. This may incur a high computational cost, because the search domain is very wide and the optimizing algorithm must constantly try new values to fine-tune the parameters. However, since this step is required only once for each type of problem, running the modified algorithm afterward does not differ in complexity or computational time. After the new parameters are found, the modified algorithm is executed. The pseudo code of the proposed MDAOA is given in Algorithm 2.

Algorithm 2 The pseudo code of the proposed MDAOA

In addition to C3 and C4, three fixed probability values are optimized. Probability values are as follows:

  • p1 is the probability compared with the transfer operator (TF) parameter,

  • p2 is the probability compared with the flag (F) for changing the search direction,

  • p3 is used in Step 5 of AOA, where it is compared with the TF value to determine either the exploration or the exploitation phase.

These probabilities are optimized by HBA within their lower and upper boundary limits. The second and third columns in Table 2 indicate the original values of the AOA parameters.

Table 2 Original [5] values of C3, C4, p1, p2, p3
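Conceptually, this stage treats the five constants as a 5-dimensional search space whose fitness is the outcome of a full MDAOA run on the target problem. The sketch below illustrates the wrapper; `hba_optimize` and `mdaoa_run` are hypothetical helper functions standing in for the algorithms of Sect. 2, and the parameter bounds are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def tune_parameters(problem, hba_optimize, mdaoa_run):
    """Stage 2 sketch: HBA searches over [C3, C4, p1, p2, p3].
    Bounds and helper signatures are assumptions for illustration."""
    lb = np.array([0.5, 0.1, 0.1, 0.1, 0.1])     # assumed lower bounds
    ub = np.array([4.0, 1.0, 0.9, 0.9, 0.9])     # assumed upper bounds

    def fitness(params):
        C3, C4, p1, p2, p3 = params
        # A full MDAOA run on the target problem scores this parameter set.
        # This is the expensive step, but it is needed only once per problem.
        _, best_cost = mdaoa_run(problem, C3=C3, C4=C4, p1=p1, p2=p2, p3=p3)
        return best_cost

    best_params, _ = hba_optimize(fitness, lb, ub, dim=5)
    return best_params                           # plugged into MDAOA thereafter
```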

The flowchart of the proposed MDAOA is given in Fig. 1.

Fig. 1 Flowchart of the proposed MDAOA

4 Experimental results

The effectiveness of the MDAOA algorithm is evaluated on four groups of functions: standard benchmark functions, the CEC 2017 test suite, five engineering problems, and the optimal placement of EVCSs in the IEEE-33 bus distribution system.

In addition to AOA and the modified AOA, the formulated objective functions are run with recent optimization algorithms of proven effectiveness on various benchmark functions (CEC, engineering, and standard). The algorithms used for comparison are: Honey Badger Algorithm (HBA-2021), Sine Cosine Algorithm (SCA-2016), Butterfly Optimization Algorithm (BOA-2019), Particle Swarm Optimization Butterfly Optimization Algorithm (PSOBOA-2020), Golden Jackal Optimization (GJO-2022), Whale Optimization Algorithm (WOA-2016), Ant Lion Optimizer (ALO-2015), Salp Swarm Algorithm (SSA-2017), and Atomic Orbital Search (AOS-2021) [4, 7,8,9,10,11, 15, 19, 41].

All algorithms are evaluated over 30 independent runs to provide sufficient consistency (tmax = 1000). Table 3 shows the parameters of the algorithms used in the comparison.

Table 3 Parameters for algorithms used in comparison

The MATLAB implementations of the optimization algorithms were obtained from the MATLAB File Exchange. Comparisons were executed using MATLAB R2019 on a Microsoft Windows 10 operating system.

4.1 Standard benchmark functions

The group of standard benchmark functions is the first of the four test function groups. Unimodal, multimodal, and fixed-dimensional functions are used to evaluate the algorithms from multiple aspects. The parameters of the selected standard benchmark functions are shown in Table 4.

Table 4 Function parameters of selected standard benchmark functions [8]

Convergence curves of the 13 standard benchmark functions, showing fitness values against the number of iterations, are presented in Fig. 2. Results of 30 runs with a maximum of 1000 iterations are presented in Table 5; bold numbers indicate the minimum values. The results indicate that MDAOA provides the best results on all of the standard benchmark functions. In addition, its standard deviation values are consistently low and better than those of the other algorithms on most functions.

Fig. 2 Convergence curves of standard benchmark functions

Table 5 Comparison of results of 13 standard benchmark functions

To draw comparative conclusions from the solution sets, two hypotheses are defined: the null hypothesis H0 and the alternative hypothesis H1. H0 states that the median errors of the compared algorithms are identical, while H1 states that at least one algorithm's median error differs from the others. The level of statistical significance (α) is a threshold used to decide whether to reject the null hypothesis; in this study it is set to 0.05.

In order to determine the most appropriate method for statistically comparing the proposed algorithm with the others, a normality test is performed using the Shapiro–Wilk (SW) test. Based on its results, the Friedman test is utilized to compare the algorithms' errors produced in every iteration, similar to the studies in [8, 19, 24, 34]. The Friedman test is a nonparametric statistical test in which the errors of the algorithms are compared to check for statistical significance [42]; the resulting mean ranks are presented in Table 5. Next, the Bonferroni-corrected Wilcoxon rank-sum test is employed as a post hoc test for pairwise comparisons of the algorithms, similar to the studies in [6, 12, 19, 24]. In other words, two solution sets are compared based on their median values. If the p-value is lower than the significance level of 0.05, the difference between the algorithms' outputs is statistically significant. The p-values computed by the Bonferroni-corrected Wilcoxon rank-sum test are presented in Table 5.
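As an illustration of this pipeline, the fragment below runs the same sequence of tests with SciPy on arrays of per-run errors. The data layout (a dict mapping each algorithm name to its 30 per-run errors) is an assumption for demonstration purposes.

```python
import numpy as np
from scipy.stats import shapiro, friedmanchisquare, ranksums

def compare(errors, alpha=0.05):
    """Shapiro-Wilk normality check, Friedman test over all algorithms,
    then Bonferroni-corrected Wilcoxon rank-sum tests of the first
    algorithm (e.g., MDAOA) against each of the others."""
    names = list(errors)
    for name in names:                            # normality rarely holds here,
        _, p = shapiro(errors[name])              # motivating nonparametric tests
        print(f"Shapiro-Wilk {name}: p = {p:.4f}")
    _, p_fried = friedmanchisquare(*errors.values())
    print(f"Friedman test: p = {p_fried:.4g}")
    k = len(names) - 1                            # number of pairwise comparisons
    for name in names[1:]:
        _, p = ranksums(errors[names[0]], errors[name])
        p_corr = min(1.0, p * k)                  # Bonferroni correction
        flag = "significant" if p_corr < alpha else "not significant"
        print(f"{names[0]} vs {name}: corrected p = {p_corr:.4g} ({flag})")
```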

According to the table, MDAOA returns the best results on all functions. It is the only algorithm to provide the minimum result on five functions and shares the best result on the remaining eight, with consistently low standard deviation values. MDAOA also has the lowest Friedman mean rank in Table 5 and therefore ranks first in optimizing the standard benchmark functions.

4.2 Competitions on evolutionary computation 2017

The Competitions on Evolutionary Computation 2017 (CEC 2017) suite is a test bed for evaluating the performance of unconstrained numerical optimization algorithms. It includes 29 benchmark functions (f2 is excluded), comprising unimodal, multimodal, hybrid, and composition functions. These benchmark functions evaluate an optimizer's ability to avoid local minima as well as its exploitation and exploration performance. The search range is [− 100, 100] and the dimension is 30. A summary of the CEC 2017 test functions is presented in Table 6.

Table 6 Summary of the CEC 2017 Test Functions [43]

In this group, the 29 CEC 2017 benchmark functions are evaluated. Convergence curves indicating minimum fitness values against the number of iterations are presented in Fig. 3. Mean, median, and standard deviation results of 30 runs with a maximum of 1000 iterations are presented in Table 7. MDAOA returned optimal results for 26 of the 29 benchmark functions, demonstrating that it can perform well on a wide range of problems.

Fig. 3 Convergence curves of CEC 2017 benchmark functions

Table 7 Comparison of results of 29 CEC 2017 benchmark functions

4.3 Optimal placement of EVCS in the distribution network

The placement of EVCSs is critical because unplanned deployment can deteriorate the voltage profile, increase active power losses, cause instantaneous load peaks, and overload transmission lines and transformers. Thus, in this study, EVCSs are placed at the best locations (specific buses) that minimize their effect on the distribution network as much as possible. The placement is performed based on an index combining power loss, voltage deviation, and the voltage stability index (VSI, the ability of a system to return to normal operating conditions after a disturbance), solved using the standard AOA and the modified MDAOA. The results give the EVCS locations at the appropriate buses of the distribution network.

4.3.1 Objective function

The multi-objective function given in Eq. 25 is used in the EVCS placement problem.

$$min\left\{ {w_{1} \times f_{1} + w_{2} \times f_{2} + \frac{{w_{3} }}{{f_{3} }}} \right\}$$
(25)

Here, w1, w2, and w3 are weight factors representing the coefficients of the functions f1, f2, and f3. The objective function \(f_{1}\), calculated by Eq. 26, minimizes the power loss.

$$f_{1} = min\left\{ {\mathop \sum \limits_{i = 1}^{{n_{branch} }} I_{i}^{2} { }R_{i} } \right\}$$
(26)

The objective function f2, calculated by Eq. 27, minimizes the voltage deviation.

$$f_{2} = min\left\{ {\mathop \sum \limits_{i = 1}^{{n_{max} }} \left( {1 - V_{i} } \right)^{2} \times 100 \times MVA_{b} } \right\}$$
(27)

The VSI value must be greater than 0; the higher the value, the better the stability of the system. The VSI values are calculated using the formula in reference [44], given in Eq. 28.

$$VSI = max\left\{ {2V_{k}^{2} V_{k + 1}^{2} - 2V_{k + 1}^{2} \left( {P_{k + 1} r + Q_{k + 1} x} \right) - \left| Z \right|^{2} \left( {P_{k + 1}^{2} + Q_{k + 1}^{2} } \right)} \right\}$$
(28)

The lowest VSI value among all buses represents the weakest link in terms of system stability. It is found by Eq. 29, and w3 in Eq. 25 is divided by this value.

$$f_{3} = min\left\{ { VSI } \right\}$$
(29)

4.3.2 Constraints

The following constraints (Eq. 30) secure optimal power flow, including limits on voltage stability, active and reactive power, and bus voltage in the distribution network.

$$\begin{aligned} & 0 < VSI_{i} \quad i = 1,2, \ldots N \\ & Active\,power\,P_{i}^{min} \le P_{i} \le P_{i}^{max} \quad i = 1,2, \ldots N \\ & Reactive\,power\,Q_{i}^{min} \le Q_{i} \le Q_{i}^{max} \quad i = 1,2, \ldots N \\ & Bus\,voltage\,V_{i}^{min} \le \left| {V_{i} } \right| \le V_{i}^{max} \quad i = 1,2, \ldots N \\ \end{aligned}$$
(30)

where N is the number of buses, and \(P_{i}\), \(Q_{i}\), and \(V_{i}\) are the active power, reactive power, and voltage of the \(i\)th bus, respectively.
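As a sketch, the composite index of Eqs. 25–29 can be evaluated as below once a load-flow solution of the network with the candidate EVCS loads is available. The input arrays, equal weights, and 100 MVA base are illustrative assumptions, not values from the paper.

```python
import numpy as np

def placement_index(I_branch, R_branch, V_bus, vsi_bus,
                    w=(1/3, 1/3, 1/3), mva_base=100.0):
    """Composite EVCS placement index of Eq. 25. Inputs are assumed to come
    from a load-flow solution; weights and base are illustrative."""
    f1 = np.sum(I_branch ** 2 * R_branch)             # Eq. 26: active power loss
    f2 = np.sum((1.0 - V_bus) ** 2) * 100 * mva_base  # Eq. 27: voltage deviation
    f3 = np.min(vsi_bus)                              # Eq. 29: weakest-bus VSI
    return w[0] * f1 + w[1] * f2 + w[2] / f3          # Eq. 25: 1/f3 rewards stability
```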

Figure 4 compares the convergence curves of the 11 optimization algorithms, and numerical results are presented in Table 8. The results show that, compared to the other techniques, MDAOA achieved successful results with the lowest standard deviation values. This shows that MDAOA can be used for the EVCS placement problem, an important and challenging issue in power system engineering.

Fig. 4 Convergence curves of the EVCS placement problem

Table 8 Comparison of results of the EVCS Placement Problem

4.4 Constrained engineering design problems

The validity and efficiency of the proposed MDAOA are evaluated on five real-life constrained engineering problems. The problems are highly complex, with multiple design variables and constraints; nevertheless, MDAOA returned optimal results on all five. Table 9 lists the engineering design problems with their parameters.

Table 9 Problem parameters

4.4.1 Tension/compression spring design

The tension/compression spring design problem is a widely used benchmark. It is an optimization problem aiming to minimize the cost of a spring with three variables: the number of active coils (N), the wire diameter (d), and the coil diameter (D). The problem has four constraints concerning deflection, stress, and surge frequency. The problem is illustrated in Fig. 5.

Fig. 5 Tension/compression spring design

The problem is mathematically formulated as:

$$x = \left[ {x_{1} , x_{2} , x_{3} } \right] = \left[ {d, D, N} \right]$$
(31)
$$f\left( x \right) = \left( {x_{3} + 2} \right)x_{2} x_{1}^{2}$$
(32)

Constraints:

$$\begin{aligned} g_{1} \left( x \right) & = 1 - \frac{{x_{2}^{3} x_{3} }}{{71785x_{1}^{4} }}{ } \le 0, \\ g_{2} \left( x \right) & = \frac{{4x_{2}^{2} - x_{1} x_{2} }}{{12566\left( {x_{2} x_{1}^{3} - x_{1}^{4} } \right)}}{ } + { }\frac{1}{{5108x_{1}^{2} }}{ } - 1{ } \le 0, \\ g_{3} \left( x \right) & = 1 - { }\frac{{140.45x_{1} }}{{x_{2}^{2} x_{3} }}{ } \le 0, \\ g_{4} \left( x \right) & = \frac{{x_{1} + x_{2} }}{1.5}{ } - 1{ } \le 0, \\ \end{aligned}$$
(33)

Range of variables

$$0.05 \le x_{1} \le 2.00, 0.25 \le x_{2} \le 1.30, 2.00 \le x_{3} \le 15.00$$
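For concreteness, the objective of Eq. 32 and the constraints of Eq. 33 can be passed to any of the optimizers above through a static penalty function, as sketched below; the penalty weight is an assumed choice, and the same pattern carries over to the remaining four engineering problems.

```python
import numpy as np

def spring_cost(x, penalty=1e6):
    """Tension/compression spring objective (Eq. 32) with penalized
    constraints (Eq. 33); the penalty weight is an assumed choice."""
    d, D, N = x                                   # wire dia., coil dia., active coils
    f = (N + 2) * D * d ** 2                      # Eq. 32: spring cost
    g = np.array([
        1 - (D ** 3 * N) / (71785 * d ** 4),                   # g1
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                             # g2
        1 - 140.45 * d / (D ** 2 * N),                         # g3
        (d + D) / 1.5 - 1,                                     # g4
    ])
    return f + penalty * np.sum(np.maximum(g, 0) ** 2)  # penalize violations only
```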

Comparative convergence curves are presented in Fig. 6, and the numerical results are shown in Table 10. The results demonstrate that the solutions calculated by MDAOA and GJO are the best in the comparison group.

Fig. 6 Convergence curves of the tension/compression spring design problem

Table 10 Comparison of results of five engineering problems

4.4.2 Pressure vessel design

The pressure vessel design problem aims to minimize the manufacturing cost of a cylindrical pressure vessel. The design and the optimization parameters are shown in Fig. 7: shell thickness (Ts), head thickness (Th), inner radius (R), and length of the cylindrical section excluding the head (L).

Fig. 7 Pressure vessel design problem

The mathematical formulation of the problem is:

$$x = \left[ {x_{1} , x_{2} , x_{3} , x_{4} } \right] = \left[ {T_{s} , T_{h} ,R, L} \right]$$
(34)
$$f\left( x \right) = 0.6224x_{1} x_{3} x_{4 } + 1.7781x_{2} x_{3}^{2} + 3.1661x_{1}^{2} x_{4} + 19.84{ }x_{1}^{2} x_{3}$$
(35)

Constraints:

$$\begin{aligned} g_{1} \left( x \right) & = - x_{1} + 0.0193x_{3} { } \le 0, \\ g_{2} \left( x \right) & = - x_{2} + 0.00954x_{3} \le 0, \\ g_{3} \left( x \right) & = -{\uppi }x_{3}^{2} x_{4} - { }\frac{4}{3}{\uppi }x_{3}^{3} + 1,296,000{ } \le 0, \\ g_{4} \left( x \right) & = x_{4} - 240 \le 0, \\ \end{aligned}$$
(36)

Range of variables

$$0 \le x_{1} \le 99, 0 \le x_{2} \le 99,\quad 10 \le x_{3} \le 200, 10 \le x_{4} \le 200$$

Convergence curves are presented in Fig. 8, and the comparative results are shown in Table 10. The results show that MDAOA outperforms the other algorithms in the comparison group.

Fig. 8 Convergence curves of the pressure vessel design problem

4.4.3 Welded beam design

The welded beam design problem minimizes the manufacturing cost of the welded beam shown in Fig. 9. The cost function includes four decision variables: weld thickness (h), length of the attached section of the bar (l), bar height (t), and bar thickness (b). There are seven constraints, including shear stress (τ), buckling load on the bar (Pc), and bending stress (σ).

Fig. 9 Schematic of the welded beam design problem

The problem is mathematically formulated as:

$$x = \left[ {x_{1} , x_{2} , x_{3} , x_{4} } \right] = \left[ {h, l,t, b} \right]$$
(37)
$$f\left( x \right) = 1.10471x_{1}^{2} x_{2} + 0.04811x_{3} x_{4 } \left( {14 + x_{2} } \right)$$
(38)

Constraints:

$$\begin{aligned} g_{1} \left( x \right) & = \tau \left( x \right) - \tau_{max} { } \le 0, \\ g_{2} \left( x \right) & = \sigma \left( x \right) - \sigma_{max} \le 0, \\ g_{3} \left( x \right) & = x_{1} - x_{4} { } \le 0, \\ g_{4} \left( x \right) & = 1.10471x_{1}^{2} + 0.04811x_{3} x_{4 } \left( {14 + x_{2} } \right) - 5 \le 0, \\ g_{5} \left( x \right) & = 0.125 - x_{1} { } \le 0, \\ g_{6} \left( x \right) & = \delta \left( x \right) - \delta_{max} \le 0, \\ g_{7} \left( x \right) & = P - P_{c} \left( x \right){ } \le 0, \\ \end{aligned}$$
(39)

Range of variables

$$0.1 \le x_{1} \le 2, 0.1 \le x_{2} \le 10, 0.1 \le x_{3} \le 10, 0.1 \le x_{4} \le 2$$

where

$$\tau(x)= \sqrt{(\tau')^2+2\tau'\tau''\frac{x_2}{2R}+(\tau'')^2} ; \quad \tau'=\frac{P}{\sqrt{2}\,x_{1}x_{2}}; \quad \tau''=\frac{MR}{J}$$
$$R = \sqrt {\frac{{x_{2}^{2} }}{4} + \left( {\frac{{x_{1} + x_{3} }}{2}} \right)^{2} } ; M = P{ }\left( {L + \frac{{x_{2} }}{2}} \right){ }$$
$$J = 2{ }\left\{ {\sqrt 2 { }x_{1} x_{{2{ }}} \left[ {\frac{{x_{2}^{2} }}{12} + \left( {\frac{{x_{1} + x_{3} }}{2}} \right)^{2} } \right]} \right\}$$
$$\sigma \left( x \right) = { }\frac{6PL}{{x_{4} x_{3}^{2} }}$$
$$\delta \left( x \right) = \frac{{4PL^{3} }}{{Ex_{3}^{3} x_{4} }}$$
$$P_{c} \left( x \right) = \frac{{4.013E\sqrt {\frac{{x_{3}^{2} x_{4}^{6} }}{36}} }}{{L^{2} }} \left( {1 - \frac{{x_{3} }}{2L} \sqrt{\frac{E}{4G}} } \right)$$

L = 14 in., δmax = 0.25 in., P = 6000 lb, E = 30 × 10^6 psi, G = 12 × 10^6 psi, τmax = 13,600 psi, σmax = 30,000 psi.

The convergence curves are shown in Fig. 10 and the results in Table 10. The solution calculated by MDAOA is better than those of all other algorithms in the comparison group.

Fig. 10 Convergence curves of the welded beam design problem

4.4.4 Speed reducer problem

In this engineering design problem, the goal is to choose the parameters of a speed reducer used in a small aircraft so as to minimize its weight. It has seven design variables and eleven constraints. The variables are: teeth module, face width, number of teeth on the pinion, lengths between bearings for the first and second shafts, and diameters of the first and second shafts. A schematic of the speed reducer design problem is presented in Fig. 11.

Fig. 11 Schematic of the speed reducer design problem

The problem is mathematically formulated as:

$$x = \left[ {x_{1} , x_{2} , x_{3} , x_{4} , x_{5} , x_{6} , x_{7} } \right]$$
(40)
$$\begin{aligned} f\left( x \right) & = 0.7854x_{1} x_{2}^{2} \times { }\left( {3.3333x_{3}^{2} { } + { }14.9334x_{3} { } - { }43.0934} \right) \\ & \quad - 1.508x_{1} \left( {x_{6}^{2} + x_{7}^{2} } \right) + 7.4777 \left( {x_{6}^{3} + x_{7}^{3} } \right) + 0.7854 \left( {x_{4} x_{6}^{2} + x_{5} x_{7}^{2} } \right) \\ \end{aligned}$$
(41)

Constraints:

$$\begin{aligned} g_{1} \left( x \right) & = \frac{27}{{x_{1} x_{2}^{2} x_{3} }} - 1{ } \le 0, \\ g_{2} \left( x \right) & = \frac{397.5}{{x_{1} x_{2}^{2} x_{3}^{2} }} - 1 \le 0, \\ g_{3} \left( x \right) & = \frac{{1.93x_{4}^{3} }}{{x_{2} x_{3} x_{6}^{4} }} - 1 { } \le 0, \\ g_{4} \left( x \right) & = \frac{{1.93x_{5}^{3} }}{{x_{2} x_{3} x_{7}^{4} }} - 1 \le 0, \\ g_{5} \left( x \right) & = \frac{1}{{110x_{6}^{3} }}\sqrt {\left( {\frac{{745x_{4} }}{{x_{2} x_{3} }}} \right)^{2} + 16.9 \times 10^{6} } - 1{ } \le 0, \\ g_{6} \left( x \right) & = \frac{1}{{85x_{7}^{3} }}\sqrt {\left( {\frac{{745x_{5} }}{{x_{2} x_{3} }}} \right)^{2} + 157.5 \times 10^{6} } - 1 \le 0, \\ g_{7} \left( x \right) & = \frac{{x_{2} x_{3} }}{40} - 1{ } \le 0, \\ g_{8} \left( x \right) & = \frac{{5x_{2} }}{{x_{1} }} - 1{ } \le 0, \\ g_{9} \left( x \right) & = \frac{{x_{1} }}{{12x_{2} }} - 1{ } \le 0, \\ g_{10} \left( x \right) & = \frac{{1.5x_{6} + 1.9}}{{x_{4} }} - 1{ } \le 0, \\ g_{11} \left( x \right) & = \frac{{1.1x_{7} + 1.9}}{{x_{5} }} - 1{ } \le 0, \\ \end{aligned}$$
(42)

Range of variables

$$2.6 \le x_{1} \le 3.6, 0.7 \le x_{2} \le 0.8, 17 \le x_{3} \le 28, 7.3 \le x_{4} \le 8.3, 7.8 \le x_{5} \le 8.3,$$
$$2.9 \le x_{6} \le 3.9, 5 \le x_{7} \le 5.5$$

The convergence curves of the algorithms are compared in Fig. 12. According to the results given in Table 10, the proposed algorithm returns superior results along with HBA.

Fig. 12 Convergence curves of the speed reducer design problem

4.4.5 Three bar truss design

The three bar truss design problem is a weight minimization problem, as shown in Fig. 13. Buckling, deflection, and stress are the constraints of the system.

Fig. 13 Schematic of the three bar truss design problem

The mathematical formulation of the problem is:

$$x = \left[ {x_{1} , x_{2} } \right] = \left[ {A_1, A_2} \right] ; A_1=A_3$$
(43)
$$f\left( x \right) = \left( {2\sqrt {2 } x_{1} + x_{2} } \right) \times L$$
(44)

Constraints:

$$\begin{aligned} g_{1} \left( x \right) & = \frac{{\sqrt {2 } x_{1} + x_{2} }}{{\sqrt {2 } x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma { } \le 0, \\ g_{2} \left( x \right) & = \frac{{x_{2} }}{{\sqrt {2 } x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma \le 0, \\ g_{3} \left( x \right) & = \frac{1}{{\sqrt {2 } x_{2} + x_{1} }}P - \sigma { } \le 0, \\ \end{aligned}$$
(45)

Range of variables

$$0 \le x_{1} \le 1, 0 \le x_{2} \le 1$$

where

$$L = 100\;{\text{cm}}, P = 2\;{\text{kN}}/{\text{cm}}^{2} , \sigma = 2 \;{\text{kN}}/{\text{cm}}^{2}$$

The convergence curves of the problem for MDAOA and the other algorithms are shown in Fig. 14. The results in Table 10 show that MDAOA returned the minimum result among all compared algorithms.

Fig. 14 Convergence curves of the three bar truss design problem

5 Conclusion

In this study, a novel optimization algorithm, MDAOA, is proposed based on modifying AOA. The goal of the modification is to avoid early convergence and improve the balance between exploitation and exploration. This is accomplished by a two-phase mechanism: optimizing the candidate positions of objects using the dimension learning-based strategy, and calculating five predetermined parameters used in the original AOA.

MDAOA uses an additional measure to select the winning object and update the existing location. The DL strategy uses a diverse approach to form a neighborhood for each object, through which neighborhood information can be conveyed among objects. The dimension learning used in the proposed work improves the balance between exploitation and exploration by means of four phases: initiation, movement/transfer, selection/updating, and termination.

In the second phase of the modification, five constant values of AOA are computed by another optimization algorithm, HBA. These parameters are computed once for each optimization problem; the modified algorithm MDAOA is then applied to a wide range of problems. The efficiency of the proposed algorithm is tested on 13 standard benchmark functions, 29 CEC 2017 benchmark functions, the optimal placement of EVCSs on the IEEE-33 distribution system, and five real-life engineering problems. Furthermore, the results of the proposed modified algorithm are compared with ten algorithms published in recent years, including a statistical analysis employing the Friedman test with the Wilcoxon rank-sum test as a post hoc test for pairwise comparisons. The experimental results and statistical analysis indicate that MDAOA performs well, with consistently low standard deviation values. MDAOA returned the best results on all 13 standard benchmarks, on 26 of the 29 CEC 2017 benchmarks (89.65%), on the optimal placement of EVCSs problem, and on all five real-life engineering problems, for an overall success rate of 45 out of 48 problems (93.75%). Although MDAOA shared the lead on 13 functions, it was the only algorithm to provide the best result on 32 functions. The algorithm presented in this study is especially suitable for engineering optimization as well as constrained, unimodal, multimodal, hybrid, and composition functions.

The proposed method improves the performance of the original AOA; however, it requires a preprocessing step in which the parameters are optimized by another algorithm. Future studies can aim to eliminate this step by developing a self-adaptive approach. In addition, more real-life optimization problems, such as the EVCS placement problem solved in this study, can be identified for optimization by MDAOA.