1 Introduction

Classical mathematical optimization methods often lack the capability to solve complicated optimization problems and the reliability to find their global optimum [1]. Novel intelligent optimization methods and algorithms have emerged in response to the limitations and defects of these classical algorithms. They proceed gradually toward the optimal regions of the problem by means of an intelligent search [2].

The existing results and evidence in the literature indicate that employing fast-converging metaheuristic algorithms for the various types of routing in IoT networks equipped with wireless sensors yields highly accurate routing [3, 4].

Many studies have addressed swarm intelligence algorithms. The Cheetah Optimizer (CO) successfully solved large-scale, challenging optimization problems by improving population diversity and convergence, providing a significant advantage over several standard and improved hybrid algorithms [5]. The Starling Murmuration Optimizer (SMO) was presented to solve complicated engineering optimization problems; empirical results show its competitiveness against other state-of-the-art algorithms in terms of solution quality and convergence rate, providing more accurate solutions [6]. The Pelican Optimization Algorithm (POA) was presented for general optimization problems; its simulation and analysis indicate better performance and higher competitiveness than eight competing algorithms, achieved by establishing a relative balance between exploration and exploitation [7]. The Artificial Hummingbird Algorithm (AHA) is more competitive than other metaheuristic algorithms, providing higher-quality solutions with fewer control parameters, and it outperforms existing optimization techniques regarding computational load and solution accuracy [8]. The African Vulture Optimization Algorithm (AVOA), a meta-heuristic inspired by the lifestyle and foraging behavior of African vultures, was proposed for global optimization problems. To evaluate the performance of AVOA, 36 standard benchmark functions were used, and the results indicate the superiority of the proposed algorithm over several existing ones [9]. The Reptile Search Algorithm (RSA) is a nature-inspired meta-heuristic optimizer whose performance was evaluated using twenty-three classical test functions. The results showed that RSA performed significantly better than other well-known optimization algorithms, can handle various constrained problems, and can solve single-objective optimization problems over continuous variables [10]. The Coati Optimization Algorithm (COA) was proposed to model the natural behavior of coatis. COA's performance was evaluated on fifty-one objective functions and compared to eleven well-known meta-heuristic algorithms. Simulation results showed that COA, by balancing exploration in global search and exploitation in local search, had a clear superiority over the compared algorithms and is much more competitive [11]. The Remora Optimization Algorithm (ROA) is a bionic, nature-inspired meta-heuristic that offers a new perspective on memetic algorithms. A total of 29 benchmark functions and five real engineering problems were used to test the validity of ROA, and the experimental results demonstrate its remarkable competitiveness against many state-of-the-art algorithms [12]. The Whale Optimization Algorithm (WOA) has been tested on 29 mathematical optimization problems and six structural design problems, and the results indicate that it competes well with other modern meta-heuristic methods [13]. The Fire Hawk Optimizer (FHO), a recent meta-heuristic algorithm, has provided acceptable results on the real-size optimization problems of CEC 2020; from a computational point of view, FHO can converge to the global best of the mathematical test functions while requiring fewer objective function evaluations [14].
The Energy Valley Optimizer (EVO) is a new meta-heuristic algorithm that can outperform alternative meta-heuristic algorithms on unconstrained mathematical test functions and converge to the global best solution with the fewest objective function evaluations [15]. The Dung Beetle Optimizer (DBO) considers both global exploration and local exploitation, and has a fast convergence rate and satisfactory solution accuracy, so it can be used effectively to solve real-world application problems [16].

Numerous studies have also been performed in recent years on dragonfly-based swarm intelligence algorithms (which underlie the method proposed in this article). The Hybrid Memory-Based Dragonfly Algorithm with Differential Evolution (DADE) is a hybrid of the dragonfly and differential evolution algorithms; its search power is enhanced using the mutation and crossover mechanisms, and its convergence power is increased by adding a memory that retains the best solutions throughout the whole optimization process [17]. A hybrid of the Sine–Cosine and Dragonfly algorithms (SC-DA) exploits the angular behavior of the sine and cosine functions in the sine–cosine algorithm; the accuracy of the dragonflies in converging to the optimal point is increased by these angular movements [18]. Biogeography-based and Mexican hat wavelet operators have been combined to improve the dragonfly algorithm (the BMDA algorithm); the search power of the hybrid model is enhanced compared to the standard dragonfly algorithm because of the search diversity of the two applied algorithms [19]. A Modified Dragonfly Optimization Algorithm Using Brownian Motion (the DABM algorithm) adds a local search to the dragonfly algorithm using the kinetic energy equation, enhancing the convergence power toward the optimum point [20]. A hybridization of the Opposition-Based Learning (OBL) and dragonfly algorithms (DA-OBL) has enhanced the exploitation power using bidirectional movement in the dragonfly algorithm's operators [21]. Considering the weakness of the exploitation phase in the dragonfly algorithm, [22] and [23] used quantum physics and the gradient, respectively, to enhance the convergence power of the algorithm. In another study, based on the binary dragonfly optimization algorithm, a multi-level method was presented for optimal planning of network capacity expansion considering distributed generation, with the aim of minimizing investment, operation, maintenance, and reliability costs for network development [24]. One capability of the dragonfly algorithm, used to detect tool wear in the manufacturing industry and leading to increased quality, increased productivity, and reduced downtime, is applying it discretely for feature selection when building effective machine learning methods [25]. The dragonfly algorithm can also be improved for determining the optimal size of a hybrid wind–solar energy system, leading to increased productivity in the energy production industry [26]. In multi-objective optimization, the dragonfly algorithm can be used in the electricity distribution industry to reduce losses [27]. To study the efficiency of improved dragonfly methods, a conceptual comparison of their variants for optimization problems has been carried out: seven methods, including the hybrid memory-based dragonfly algorithm with differential evolution (DADE), the quantum-behaved and Gaussian mutational dragonfly algorithm (QGDA), the memory-based hybrid dragonfly algorithm (MHDA), the chaotic dragonfly algorithm (CDA), the biogeography-based Mexican hat wavelet dragonfly algorithm (BMDA), the hybrid Nelder–Mead and dragonfly algorithm (INMDA), and the hybridization of the dragonfly algorithm and artificial bee colony (HDA), were investigated, and the Friedman test was used for ranking, which showed the effectiveness of the QGDA method [28].

One application of metaheuristic algorithms is in clustering the nodes of Internet of Things networks. For example, a study entitled "Cluster-Based Energy-Efficient Routing in Internet of Things" attempts to choose the best cluster heads among the sensors to reduce energy consumption in the network nodes and eventually reduce the whole network's energy consumption [29].

Some research has employed metaheuristic algorithms in the field of network routing and clustering, including energy-based clustering in inter-vehicular networks [30], routing of massive data transmissions in IoT networks [31], routing in mobile wireless networks [32, 33], whole-network energy consumption, data transmission power, and the effect of the distance between mobile wireless network nodes on stability [34,35,36,37], network self-stability [38], data transmission protocols in IoT networks [39], and routing in IoT networks that considers energy reduction and stability enhancement simultaneously [40].

Routing and clustering in wireless networks and IoT networks depend strongly on the number of network nodes and on evaluation criteria such as reducing energy consumption and enhancing stability. Research results indicate that using metaheuristic methods, or hybrids of such methods, is one of the most common approaches to achieving higher accuracy in the routing process in wireless networks.

Utilizing metaheuristic methods, including the ant colony algorithm, the particle swarm algorithm, a hybrid of the ant colony and particle swarm algorithms, and a hybrid of the ant colony and firefly algorithms [41,42,43,44,45], for routing in various kinds of IoT networks indicates their efficiency in choosing the best path and improving the results. Furthermore, a comparison of metaheuristic methods shows their capability to solve problems with single-objective and multi-objective approaches, optimizing several factors that affect routing at the same time [46].

The most crucial IoT network clustering method is the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol. Some researchers have compared LEACH-based hierarchical routing protocols such as LEACH-C, MM-LEACH, TL-LEACH, V-LEACH, and MOD-LEACH based on energy waste or on specific criteria such as the number of cluster heads (CHs) and the number of hops [47, 48].

The algorithm proposed in this study addresses several main objectives in developing LEACH-derived protocols for wireless sensor networks (WSNs), including enhanced WSN performance, energy efficiency, energy distribution among nodes, increased scalability, selection of the best node in each cluster as the cluster head, increased security, decreased network delay, and increased network stability.

Therefore, this article introduces a novel algorithm, the "Lotus Effect Algorithm (LEA)," derived from the dragonfly algorithm [49], to improve the routing process and further reduce the consumed energy. The remainder of this paper is organized as follows:

Section 2 provides the source of inspiration and the biological basis of the algorithm. The mathematical models and the Lotus flower algorithm are presented in Sect. 3; the application of the LEA to node clustering in IoT is also introduced in this section. Section 4 covers the evaluation and results of applying the proposed LEA to test functions. The algorithm is also applied to two real-world engineering optimization problems with multiple constraints, namely the welded beam design problem and the speed-reducer problem used in a gearbox, to confirm its performance. In the rest of Sect. 4, the results of applying the LEA to improve the clustering of IoT networks are compared to the results obtained from other dragonfly-based methods. Finally, the paper concludes with Sect. 5, which includes the conclusion and proposals for future studies.

2 Inspiration

The lotus effect refers to the super-hydrophobic and self-cleaning characteristics of lotus leaves. This effect has attracted much attention and found applications at various scales since it was first explained by Professor Wilhelm Barthlott in 1977 [50, 51]. However, super-hydrophobic surfaces have been studied since the 1950s [52].

The self-cleaning feature is a nanoscience achievement inspired by lotus leaves. Investigating these leaves at the nanoscale shows that water droplets roll off the leaves because of the hydrophobicity of their uneven surfaces [53].

This property of the plant's leaves causes water to collect the dirt on the leaves and slide off without being absorbed. The movement of the drops on the leaves can be regarded as a local search, which is best illustrated in Fig. 1, a picture of a lotus flower and its leaves with some water droplets on them [54, 55].

Fig. 1 Lotus (Nelumbo nucifera) plant (adapted from Barthlott et al. [54] and Collins et al. [55])

The pollination of the lotus flower and the properties of its leaves have inspired the proposed LEA. The algorithm builds on the efficiency of the dragonfly algorithm, augmented with pollination [56] and a local search modeled on the movement of water over the lotus leaves.

Pollination comprises two main processes, the biological process (cross-pollination) and the non-biological process (self-pollination), which correspond to the exploration and exploitation phases of the LEA.

In biological pollination, pollen is transferred from one flower to the flowers of other plants with the help of pollinators such as insects and birds. This process takes place over a long period. It is considered global pollination, a search of the whole problem space (exploration), and is modeled by the mathematical model of the dragonfly optimization algorithm [49].

In the non-biological process (self-pollination), fertilization is performed with pollen from flowers of the same plant. In this kind of pollination, factors such as wind and dispersal in water help pollinate these flowering plants. Self-pollination (self-fertilization) is considered local pollination and is used for searching and exploiting the local optimum. Owing to the property of the leaves, on which water slides off without being absorbed, an additional local search is considered for the movement of the droplets over the leaves.

3 Lotus flower algorithm introduction

Considering the necessity of the exploration and exploitation processes in metaheuristic algorithms, as well as their counterparts in the lotus flower's life, the idea is based on the following items:

  • Exploration: insects such as dragonflies spread the seeds, and their movements can be extended for this purpose (refer to the dragonfly optimization algorithm).

  • Exploitation: the flower's blooms cluster around a focal core, which inspires a local search using a multi-population obtained by clustering the search agents (derived from lotus blooms).

  • Exploitation reinforcement: water moving over the plant's leaves and exiting from the nearest opening on the leaf surface inspires a local search in the algorithm for finding the optimum points (utilizing local search algorithms such as the Hill Climbing Algorithm, HCA).

Note: all variables used in this article are explained in Appendix A.

3.1 LEA exploration phase

Global pollination (biological) is performed by dragonflies [49] in the LEA, which is identical to the exploration phase of the proposed algorithm. Three basic principles of the swarms of insects—separation, alignment, and cohesion—as well as two concepts of food and enemy, are considered in the dragonfly algorithm to simulate the intelligent behavior of dragonflies. Separation indicates preventing an individual from colliding with neighboring individuals. Alignment indicates the velocity matching of individuals to that of other individuals in the neighborhood. Cohesion indicates the tendency of individuals toward the center of the mass of the neighborhood. The main objective for each swarm is to survive. Therefore, individuals must all be attracted toward food sources and distracted from enemies. Considering these two behaviors, there are five factors in updating the individuals' positions in the swarm, each of which could be mathematically modeled. Separation is calculated as follows [49]:

$$S_i^t = - \mathop \sum \limits_{j = 1}^N \left( {X_i^t - X_j^t } \right)$$
(1)

where \(X_i\) indicates the current individual's position with index i in the evolution iteration t, \(X_j\) indicates the position of jth individual in the neighborhood in the evolution iteration t, and N is the number of individuals in the neighborhood.

Alignment is calculated as follows:

$$A_i^t = \frac{{\sum_{j = 1}^N \Delta X_j^t }}{N}$$
(2)

where \(\Delta X_j^t\) indicates the velocity (step vector) of the jth individual in the neighborhood in the evolution iteration t.

Cohesion is calculated as follows:

$$C_i^t = \frac{{\sum_{j = 1}^N X_j^t }}{N} - X_i^t$$
(3)

where \(X_i\) indicates the current individual's position with index i in the evolution iteration t, N is the number of neighbors, and \(X_j\) indicates the position of jth individual in the neighborhood in the evolution iteration t.

Attraction toward food sources is calculated as follows:

$$F_i^t = X_+^t - X_i^t$$
(4)

where \(X_i^t\) indicates the current individual's position with index i in the evolution iteration t, and \(X_+^t\) is the position of the food source obtained in the current evolution iteration t, i.e., the best-found solution.

Enemy distraction is calculated as follows:

$$E_i^t = X_-^t + X_i^t$$
(5)

where \(X_i^t\) indicates the current individual's position with index i in the evolution iteration t, and \(X_-^t\) is the enemy's position obtained in the current evolution iteration t, i.e., the worst-found solution.

The behavior of dragonflies is a combination of these five patterns. In order to update the artificial dragonflies' positions in the search space and simulate their movements, two vectors are considered: the step vector and the position vector. The step vector is similar to the velocity vector in the Particle Swarm Optimization (PSO) algorithm, and the dragonfly algorithm is developed based on the framework of the PSO algorithm. The step or velocity vector shows the movement direction of the dragonflies and is defined as follows:

$$\Delta X_i^{t + 1} = \left( {sS_i^t + aA_i^t + cC_i^t + fF_i^t + eE_i^t } \right) + w\Delta X_i^t$$
(6)

where s is the separation coefficient, \(S_i^t\) is the separation of the ith individual in evolution iteration t, a is the alignment coefficient, \(A_{ i}^t\) is the alignment of the ith individual, c is the cohesion coefficient, \(C_i^t\) is the cohesion of the ith individual, f is the food factor, \(F_i^t\) is the food attraction of the ith individual, e is the enemy factor, \(E_i^t\) is the enemy distraction of the ith individual, w is the inertia weight, and finally, t is the iteration counter of the algorithm.

After calculating the step vector, the position vectors are calculated as follows:

$$X_i^{t + 1} = X_i^t + w\Delta X_i^{t + 1}$$
(7)

where t is the iteration counter of the algorithm.

Utilizing the separation (s), alignment (a), cohesion (c), food (f), and enemy (e) factors, various explorative and exploitative behaviors can be realized during optimization. The position of the food source and the position of the enemy are taken from the best and the worst solutions found by the whole swarm. This causes convergence toward the promising regions of the search space and divergence from the undesirable ones. In order to improve the stochastic behavior of the artificial dragonflies during exploration, they need to fly around the search space with a random step length when there is no solution in their neighborhood. In this case, the position of a dragonfly is updated using the following relation:

$$X_i^{t + 1} = X_i^t + {\text{Levy}}(d) \times X_i^t$$
(8)

where t is the current iteration counter, and d is the dimensions of the position vector. Levy is calculated using the following relation:

$${\text{Levy}}\left( x \right) = 0.01 \times \frac{r_1 \times \sigma }{{\left| {r_2 } \right|^{\frac{1}{\beta }} }}$$
(9)

where r1 and r2 are two random numbers in the interval between zero and one, and \(\beta\) is a constant number. \(\sigma\) is calculated using the following relation:

$$\sigma = \left( {\frac{{\Gamma \left( {1 + \beta } \right) \times \sin \left( {\frac{\pi \beta }{2}} \right)}}{{\Gamma \left( {\frac{1 + \beta }{2}} \right) \times \beta \times 2^{\left( {\frac{\beta - 1}{2}} \right)} }}} \right)^{\frac{1}{\beta }}$$
(10)

where \(\Gamma\) is the gamma function:

$$\Gamma \left( x \right) = \left( {x - 1} \right)!$$
(11)
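To make the exploration update concrete, the following Python sketch (an illustrative reimplementation, not the authors' reference code) applies relations (1)–(10) to one artificial dragonfly. It assumes positions and step vectors are NumPy arrays, the neighborhood is passed in explicitly, and the coefficients s, a, c, f, e, and w are chosen by the caller; all function and variable names are hypothetical.

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5):
    """Levy flight step for relations (9)-(11); beta is a constant."""
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2)) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r1, r2 = np.random.rand(dim), np.random.rand(dim)
    return 0.01 * r1 * sigma / (np.abs(r2) ** (1 / beta))

def exploration_step(X_i, dX_i, neigh_X, neigh_dX, X_food, X_enemy, s, a, c, f, e, w):
    """One dragonfly update of the step and position vectors, relations (1)-(8)."""
    if len(neigh_X) > 0:
        neigh_X, neigh_dX = np.asarray(neigh_X), np.asarray(neigh_dX)
        S = -np.sum(X_i - neigh_X, axis=0)          # separation, relation (1)
        A = neigh_dX.mean(axis=0)                   # alignment, relation (2)
        C = neigh_X.mean(axis=0) - X_i              # cohesion, relation (3)
        F = X_food - X_i                            # attraction to food, relation (4)
        E = X_enemy + X_i                           # distraction from enemy, relation (5)
        dX_new = (s * S + a * A + c * C + f * F + e * E) + w * dX_i   # step vector, relation (6)
        X_new = X_i + w * dX_new                    # position update, relation (7)
    else:
        dX_new = np.zeros_like(dX_i)                # no neighbors: random Levy flight, relation (8)
        X_new = X_i + levy(X_i.size) * X_i
    return X_new, dX_new
```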

3.2 LEA exploitation phase

Local pollination (self-fertilization) constitutes the exploitation phase of the proposed algorithm. In this kind of pollination, a coefficient specifies the size of the growth area of each flower around the best-found flower. The best-found solution is the basis of the movement, and the other solutions move relative to it. The movement steps are longer at the beginning of the algorithm and shorter toward the end.

$$X_i^{t + 1} = X_i^t + R \left( {X_i^t - g^* } \right)$$
(12)

where \(X_i^{t + 1}\) is the location of the ith pollen in the (t + 1)th iteration and \(g^*\) is the best pollen location found over all evolution iterations so far. R is the growth-area coefficient, which shrinks as the algorithm iterates. In fact, the movement steps are longer at the beginning of the algorithm and become shorter toward its end, until the algorithm converges to the optimum.

$$R = 2 e ^{ - \left( {\frac{4 t }{L}} \right)^2 }$$
(13)

where t is the current evolution iteration of the algorithm and L is the maximum number of iterations.

Note In the proposed algorithm, the neighborhood radius of the dragonflies (in the exploration phase) and R (in the exploitation phase) are used to balance exploration and exploitation. The radius of the dragonflies is an incremental value that ultimately makes the swarm cohesive, and the variable R is adjusted according to the iteration counter of the algorithm, shrinking the movement steps from long to short.
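A minimal sketch of the exploitation update in relations (12)–(13), assuming `X` is the population matrix (one solution per row), `g_star` is the best solution found so far, and `t` and `L` are the current and maximum iteration counters; the names are illustrative.

```python
import numpy as np

def exploitation_step(X, g_star, t, L):
    """Local pollination around the best-found solution, relations (12)-(13)."""
    R = 2 * np.exp(-((4 * t / L) ** 2))   # growth area, shrinks over the iterations, relation (13)
    return X + R * (X - g_star)           # step relative to the best solution g*, relation (12)
```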

3.3 LEA exploitation phase reinforcement

A local search based on water drops is considered to model the movement of water over the lotus leaves. As the water drops move toward the first pits on the leaf, the pits fill up and the water overflows onto the leaf. Each swarm member, i.e., each solution, is called a drop with a position (\(X_i\)) and an initial velocity (\(V_i\)), moving in the problem search space to find the optimal solution. After forming on the leaf, a drop settles in the local optimum nearest to it. This local optimum is called a pit. Each pit has a capacity for holding drops that depends on its depth (fitness). Figure 2a illustrates some pits with different capacities.

Fig. 2

Some pits with different capacities and the search operation of the overflowed water in the state space

The deepest pit is considered the most valuable pit (the pit with the best fitness) in each iteration. In the local search model, a velocity vector is considered for each drop, whose initial value is the length of the primitive step received as an input; in each iteration, the movement vector is added to the drop's velocity vector, and the updated velocity is then added to its position. The drop movement equation is shown in relation (17). These stages are repeated β times, where the coefficient β is received as an input.

In this model, each pit has a capacity determined by its depth, i.e., its fitness: the greater the depth, the greater the capacity. The capacity of every pit is calculated as follows in each iteration:

$$c_i^t = \frac{{\left( {\left| {f_i^t - f_{{\text{Max}}} } \right|} \right) \times {\text{const}}}}{{\left( {\left| {f_{{\text{Min}}} - f_{{\text{Max}}} } \right|} \right)}}$$
(14)

where \(c_i^t\) is the capacity of pit i in the evolution iteration t, \(f_i^t\) is the fitness of the ith pit in the evolution iteration t, \(f_{{\text{Max}}}\) is the largest fitness among the pits, and \(f_{{\text{Min}}}\) is the smallest fitness among the pits. \({\text{Const}}\) is a constant indicating the maximum capacity of a pit for a given objective function. Figure 2b illustrates five pits in the first iteration of the algorithm. In each iteration, a random amount of the pits' average capacity is added to each pit. If the drops in a pit exceed its capacity, that pit is excluded and its water flows onto the leaf surface. Figure 2c illustrates water overflowing from two limited-capacity pits in the first iteration and the identification of a pit with more capacity. The overflowed water is directed toward a pit with a larger capacity than the overflowed one. The target pit is selected randomly among the existing pits with higher capacity, with a priority such that the larger a pit's capacity, the higher its selection probability. Relation (15) gives the probability of selecting a pit among the existing pits.

$${\text{Select}}_i^t = \frac{c_i^t }{{\sum_{j = 0}^k c_j^t }}$$
(15)

where \({\text{Select}}_i^t\) is the probability of selecting pit i in the evolution iteration t, \(c_i^t\) is the capacity of the ith pit in the evolution iteration t, and k is the number of pits whose capacity is higher than that of the overflowed pit.

After a pit is selected, the overflowed water moves toward it. If, along the way, the water encounters a pit with more capacity than the source pit, the overflowed water pours into that higher-capacity pit and the movement stops; otherwise, it continues until it reaches the selected pit, into which the overflowed water is poured. The source pit is removed at the end of the movement. The greater a pit's capacity, the higher the probability of reusing it during the algorithm's execution (it will not be removed), and the more likely it is selected to capture the overflowed water flowing toward it. In this movement, unlike in the PSO algorithm, the particles (drops) have no memory for holding the best position during the search, and no knowledge of good solutions is maintained per drop. A drop can locate the other drops' current positions, and the position of the best drop is known at any moment.

Relation (16) shows the velocity and position of the drops' movement during local search, and relation (17) shows the velocity and position of the drops' movement overflowing from a pit on the surface.

$$V_i^{t + 1} = {\text{q}} \times V_i^t \quad {\text{and}}\quad X_i^{t + 1} = X_i^t + V_i^{t + 1}$$
(16)
$$V_i^{t + 1} = V_i^t + {\text{Rand}}\left( {X_{{\text{deep}} {\text{pit}}}^t - X_i^t } \right)\quad {\text{and}}\quad X_i^{t + 1} = X_i^t + V_i^{t + 1}$$
(17)

In relations 16 and 17, \(X_{{\text{deep}} {\text{pit}}}^t\) is the current position of the deepest pit in the evolution iteration t, \(V_i^t\) is the current velocity of drop i in the evolution iteration t, \(X_i^t\) is the current position of drop i in the evolution iteration t, and q is the speed increment coefficient.

In an iterative process, two interactions take place between the drops, evaluated by their competency and fitness criteria, in order to improve the swarm experience (the deepest pit): 1. a pit becoming a candidate to receive the overflowed water from other pits (represented by a drop), and 2. increasing the drops' competency. The execution steps of the LEA and its flowchart are provided in the following.
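The following sketch illustrates the pit-based local search of relations (14)–(17) under stated assumptions: fitness values, capacities, and positions are NumPy arrays, `const` plays the role of the maximum capacity in relation (14), and `q` is the speed increment coefficient; the helper names are hypothetical, not the authors' implementation.

```python
import numpy as np

def pit_capacities(fitness, const):
    """Capacity of every pit from its fitness, relation (14)."""
    f_max, f_min = fitness.max(), fitness.min()
    return np.abs(fitness - f_max) * const / np.abs(f_min - f_max)

def select_target_pit(capacities, overflowed_idx):
    """Roulette-wheel choice among pits deeper than the overflowed one, relation (15)."""
    candidates = np.where(capacities > capacities[overflowed_idx])[0]
    if candidates.size == 0:
        return None
    probs = capacities[candidates] / capacities[candidates].sum()
    return int(np.random.choice(candidates, p=probs))

def drop_move(X_i, V_i, q):
    """Drop movement inside the local search, relation (16)."""
    V_new = q * V_i
    return X_i + V_new, V_new

def overflow_move(X_i, V_i, X_deepest_pit):
    """Movement of overflowed water toward the deepest pit, relation (17)."""
    V_new = V_i + np.random.rand() * (X_deepest_pit - X_i)
    return X_i + V_new, V_new
```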

3.4 LEA steps

The algorithm starts by generating initial solutions. Then, the best solution is identified after evaluation. The best solution is the basis of the other drops' movement in each iteration. In the proposed method, the candidate solutions are called "flowers" or "dragonflies"; they are structurally alike, and the two names are used solely for a better understanding of the steps. The search process for each candidate solution is performed either using the dragonfly algorithm's mechanism or using the local search. The pseudo-code and flowchart of the algorithm are provided in the following.

figure a

Figure 3 shows the flow chart of the proposed model.

Fig. 3

Flow chart of the proposed model

3.5 LEA application in node clustering in IoT

One application of the LEA is improving clustering and accurately selecting the proper cluster heads in IoT networks. The sensors' roles change according to the hierarchical model: some sensors collect data and some send it. In the LEA, the network is divided into separate clusters; the cluster head collects data from each wireless sensor periodically according to Time Division Multiple Access (TDMA) and compresses it to some small extent. The data are then sent directly or indirectly to the base station in multiple phases. The proposed method changes the cluster head periodically, balancing the load in the network. The primary LEA operations are divided into two phases: first, a setup phase composed of two steps, clustering and cluster head determination; and second, a steady-state phase that focuses on data integration, collection, and transmission to the base station. The first phase imposes little overhead on the protocol. In the setup phase, the cluster head is accurately selected periodically and the consumed energy is distributed among the network nodes. Accurate periodic selection of the cluster head requires each node to generate a random number in the [0, 1] interval. The random number is compared to the cluster head threshold given by relation (18) [29].

$$T(n) = \left\{ {\begin{array}{*{20}c} {\frac{p}{{1 - p\left( {r\,{\text{mod}}\left( \frac{1}{p} \right)} \right)}}} & {n \in G} \\ 0 & {{\text{otherwise}}} \\ \end{array} } \right.$$
(18)

where T is the cluster head threshold, r is the current round, p is the percentage of cluster heads relative to all nodes in the network, and G is the set of nodes that have not been selected as a cluster head in the last 1/p rounds [21]. A node is selected as a cluster head when its random number is less than the threshold (a minimal sketch of this threshold test is given after the steps below). In this method, the clusters are managed locally and global network information is not required. Each cluster integrates data and saves energy, and the nodes do not need to send data directly to the base station. Finally, the selected cluster head announces the new roles to the other network nodes, which then join the cluster. Each cluster's head generates a TDMA-based schedule, i.e., the time slots allocated to each cluster member, and distributes it within the cluster. The second phase starts after the completion of the first phase. In this phase, the nodes gather the data assigned to them during the different periods and send it to the cluster head node. Note that data collection is performed periodically. The steps of the proposed method for cluster head selection are as follows:

  1. Generating a network including different sensors in the application space.

  2. Random generation of the cluster heads as search agents using the Lotus effect algorithm.

  3. Spreading the sensors among the cluster heads according to the second step.

    3-1. Fitness evaluation of each search agent.

    3-2. Changing the search agents based on the Lotus effect algorithm's operators.

    3-3. Stop condition: completion of the Lotus effect algorithm's iterations; otherwise, go to step 3-1.

  4. Sending data over the network for a specific period of time.

  5. End of the algorithm if the network's lifetime is expired; otherwise, go to step 2 with the live sensors (changing the length of the search agents).
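As referenced above, the following sketch shows the LEACH-style threshold test of relation (18) used in the setup phase; `p` is the cluster head percentage, `r` the current round, and `eligible_nodes` stands for the set G of nodes not chosen in the last 1/p rounds. The function names are illustrative assumptions, not the authors' code.

```python
import random

def cluster_head_threshold(p, r):
    """Threshold T(n) for nodes in G, relation (18)."""
    return p / (1 - p * (r % (1 / p)))

def elect_cluster_heads(eligible_nodes, p, r):
    """A node becomes a cluster head when its random draw in [0, 1] falls below T(n)."""
    T = cluster_head_threshold(p, r)
    return [n for n in eligible_nodes if random.random() < T]
```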

Therefore, accurate and optimal selection of the cluster heads is vital in a network, such that, on the one hand, the energy for data transmission from the wireless sensors to the cluster heads is minimal and, on the other hand, a suitable multi-step path can be designed among the cluster heads for sending their data to the base station. In this case, the sensor network's lifetime is increased.

A solution to the given problem (cluster head selection in the network) is provided based on the proposed method. For this purpose, a network is considered in which some wireless sensors are distributed randomly. The base station is assumed to know the physical position of all nodes using a geolocation tool; the aim is to cluster the wireless sensors. After clustering, each sensor collects data from the environment and sends it directly (in a single step) to its cluster head. Each cluster head collects the received data and sends it over multiple steps (through other cluster heads) to the base station. After the cluster heads are selected, each wireless sensor is assigned to the closest cluster head within its communication range. The accurate selection of the cluster heads and the search for a multi-step path among them are performed centrally in the base station, and the result is sent to all nodes. In this article, it is assumed that the base station has no energy constraint and that the network is static, i.e., the positions of the sensors and cluster heads remain fixed after placement in the environment.

Each round of data collection and transmission of the gathered data from all the cluster heads to the base station is considered one period. The network lifetime is therefore defined as the number of elapsed periods until the first cluster head's energy is depleted. The energy consumed by node i in one period is given by relation (19):

$$E_i = E_{R_i } + E_{T_i }$$
(19)

where \(E_{R_i }\) is the consumed energy of node i for receiving data and is calculated using relation (20):

$$E_{R_i } = \alpha_1 .b_i$$
(20)

where \(b_i\) represents the number of bits received by the ith cluster head in one period and \(\alpha_1\) is the reception energy coefficient. In relation (19), \(E_{T_i { }}\) is the energy consumed by node i for sending data to node j (j could be the base station or another cluster head) and is calculated using relation (21):

$$E_{T_i } = \alpha_2 \cdot b_i + \beta \cdot b_i \cdot d_{i,j}^m$$
(21)

where \(d_{i,j}\) is the Euclidean distance between nodes i and j, \(\alpha_2\) is the transmission energy coefficient, \(\beta\) is the amplification (reinforcement) coefficient, and m is the path attenuation exponent, a number between 2 and 4.
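The first-order radio model of relations (19)–(21), in the reconstructed form given above, can be sketched as follows; the coefficient names mirror the text (\(\alpha_1\), \(\alpha_2\), \(\beta\), m), and the function names are illustrative assumptions.

```python
def reception_energy(bits, alpha1):
    """Energy consumed to receive 'bits' bits, relation (20)."""
    return alpha1 * bits

def transmission_energy(bits, distance, alpha2, beta, m=2):
    """Energy consumed to transmit 'bits' bits over 'distance', relation (21)."""
    return alpha2 * bits + beta * bits * distance ** m

def node_energy_per_period(bits_rx, bits_tx, distance, alpha1, alpha2, beta, m=2):
    """Total energy of a node in one period, relation (19)."""
    return (reception_energy(bits_rx, alpha1)
            + transmission_energy(bits_tx, distance, alpha2, beta, m))
```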

The proposed method selects the desired cluster heads using the LEA. This research aims to select as cluster heads the nodes that minimize energy consumption and can establish an appropriate multi-step path among themselves. The proposed method is employed with binary encoding to evaluate the LEA's application to cluster head selection in the network. Each solution is equivalent to one cluster head selection. Each solution is a binary array whose length equals the total number of network sensors (each bit corresponds to one sensor). A bit of value "1" in the array means its corresponding node is selected as a cluster head, and a value of "0" represents an ordinary node.
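A small sketch of the binary encoding described above, assuming node coordinates are available as a NumPy array; each bit set to 1 marks a cluster head, and every ordinary node is attached to its nearest selected head. The function name is an illustrative assumption.

```python
import numpy as np

def decode_solution(bits, positions):
    """Map a binary solution to a {node: cluster head} assignment by nearest Euclidean distance."""
    heads = np.where(bits == 1)[0]
    if heads.size == 0:               # degenerate solution with no cluster head
        return {}
    assignment = {}
    for node in np.where(bits == 0)[0]:
        dists = np.linalg.norm(positions[heads] - positions[node], axis=1)
        assignment[int(node)] = int(heads[np.argmin(dists)])
    return assignment
```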

After the cluster heads are determined, each ordinary node is attached to the closest cluster head. The initial population of the proposed algorithm is generated randomly, and the suitability of each member of the population is evaluated using a fitness function. Each solution's fitness value is based on the network lifetime. The network lifetime is defined by relation (22) as the number of data collection periods until the first cluster head's energy is depleted:

$$L_{{\text{net}}} = \frac{{E_{{\text{initial}}} }}{{E_{\max } }}$$
(22)

where \(E_{{\text{initial}}}\) is a cluster head's initial energy, which is considered identical for all cluster heads. \(E_{{\text{max}}}\) is the energy consumed by the cluster head that consumes the most energy in one period of data collection. Given the number of selected cluster heads, n, it is calculated according to relation (23):

$$E_{\max } = {\text{MAX}} \,E_{{\text{CH}} _i } \quad 1 \le i \le n$$
(23)

where \(E_{{\text{CH }}_i }\) is the energy consumed by the ith cluster head. \(L_{{\text{net}}}\) in relation (22) represents the network's lifetime, i.e., the number of data collection periods until the energy of the cluster head that consumes the most energy is depleted. It should be noted that cluster heads consume energy for two purposes: (1) receiving data from the sensors located in their cluster, and (2) acting as intermediaries that receive and forward data in the multi-step routing among the cluster heads. Therefore, a cluster head's energy consumption in one data collection period is calculated using relation (24):

$$E_{{\text{CH}} _i } = E_{{\text{intra}}\, {\text{Cluster}}_i }^R + E_{{\text{inter}} \,{\text{Cluster}}_i }$$
(24)

Note The objective function in node clustering is Eq. (24), where the objective is to minimize the energy consumption in the network.

where \(E_{{\text{intra}} \,{\text{Cluster}}_i }^R\) is the amount of energy the ith cluster head consumes to receive data from the sensors inside the ith cluster and is calculated by relation (25):

$$E_{{\text{intra}}\, {\text{Cluster}}_i }^R = \sum \limits_{S \in C_i } E_{{\text{CH}}_i S}^R$$
(25)

where \(C_i\) is the set of the ith cluster's sensors, and \(E_{{\text{CH}}_i {\text{S}}}^R\) is the energy consumed by the ith cluster head to receive data from sensor S, calculated by relation (20) of the first-order radio model. \(E_{{\text{inter}} \,{\text{Cluster}}_i }\) in relation (24) is the amount of energy consumed by the cluster head for routing among the cluster heads, i.e., receiving data from the preceding cluster heads and sending data to the succeeding cluster head. The amount of this energy depends on the multi-step routing performed among the selected cluster heads.
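Putting relations (22)–(25) together, a hedged sketch of the fitness evaluation for one candidate set of cluster heads could look as follows; `intra_rx_energies` and `inter_cluster_energy` are assumed to be computed per cluster head from the radio model above, and the names are illustrative.

```python
def cluster_head_energy(intra_rx_energies, inter_cluster_energy):
    """Per-period energy of one cluster head, relations (24)-(25)."""
    return sum(intra_rx_energies) + inter_cluster_energy

def lifetime_fitness(E_initial, cluster_head_energies):
    """Network lifetime used as the fitness value, relations (22)-(23)."""
    E_max = max(cluster_head_energies)   # the most energy-hungry cluster head, relation (23)
    return E_initial / E_max             # periods until its energy depletes, relation (22)
```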

4 Evaluation and conclusion

The results of the proposed LEA are evaluated and compared against other optimization algorithms in three parts. First, since the basis of the proposed algorithm is the dragonfly algorithm inspired by the lotus flower, the LEA results are compared against other dragonfly-based algorithms using the CEC-BC-2017 benchmark set of 29 functions, which comprises 23 optimization functions and 6 composite functions built from them [57, 58]. Second, the results of applying the LEA to two engineering optimization problems with multiple constraints are compared to the results of a series of optimization methods. Third, the results of applying the LEA to improve the clustering of IoT networks are compared to the results of other dragonfly-based methods applied for the same purpose. All simulation environments are implemented in MATLAB under Windows 10.

Note The specifications of the test functions are explained in Appendix B.

4.1 LEA results on test functions

Because the basis of the proposed algorithm is the dragonfly algorithm inspired by the lotus flower, the LEA results are compared against improved dragonfly algorithms on the test functions. These benchmark test functions include high-dimensional and hybrid functions and pose a good comparison challenge for optimization methods. For a fair comparison with the proposed method, 30 search agents, 500 iterations, and a maximum of 15,000 function evaluations were used for all the benchmark functions. To evaluate the methods on the benchmark set, ranking with Friedman's test was used.

Functions 1–7 have a single optimum, functions 8–13 are high-dimensional functions with multiple optima, functions 14–23 are fixed-dimension functions with multiple optima, and finally, functions 24–29 are hybrid functions with high complexity. Evaluation on the benchmark set is performed using Friedman's ranking test.
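For reference, average Friedman ranks of this kind can be computed with SciPy; the sketch below assumes `results[i, j]` holds the mean best value of algorithm j on benchmark function i (lower is better), and the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_ranking(results, algorithm_names):
    """Average Friedman rank per algorithm and the test statistic over all benchmarks."""
    ranks = np.apply_along_axis(rankdata, 1, results)   # rank algorithms within each function
    avg_ranks = dict(zip(algorithm_names, ranks.mean(axis=0)))
    stat, p_value = friedmanchisquare(*results.T)       # one measurement series per algorithm
    return avg_ranks, stat, p_value
```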

Table 1 compares the results to a series of collective intelligence algorithms including DA [49], DADE [17], SC-DA [18], BMDA [19], DABM [20], DA-OBL [21], QGDA [22], and DACG [23]. To thoroughly test the LEA, the benchmark function set CEC-BC-2017, including 29 benchmark functions, is used [57]. Figure 4 shows a view of some benchmark functions.

Fig. 4

A view of some benchmark functions

Table 1 Optimization method's results on the benchmark functions CEC-BC-2017

As shown in Table 1, the LEA is the best method on functions F1–F7 and has a low standard deviation in converging to the global optimum. On function F5, the LEA is placed after QGDA. The results indicate that the LEA outperforms the other methods on the higher-dimensional problems F9–F11. Moreover, on F9 and F10, the LEA attained the global optimum, while most other methods could not. On F8, the most challenging function in this benchmark class, the LEA holds the second rank after QGDA, almost reaching the global optimum. For F12 and F13, BMDA outperforms the other methods, while the LEA remains competitive, especially on F12. On benchmark functions F8–F13, the LEA holds the second performance rank after BMDA. All methods perform similarly on the fixed-dimension problems (F14–F23); however, the LEA's results are highly competitive. The LEA achieved the global optimum on functions F14, F16, F17, F18, and F19, and its results are very close to the global optimum for the rest of the functions.

The hybrid functions are the most challenging benchmark functions and can be used to assess avoidance of convergence to local optima. The results for functions F24–F29 indicate that the DACG method outperforms the others. The LEA ranks fourth on most of these functions, behind DACG, QGDA, and DADE; however, its results are competitive with the QGDA and DADE methods in most cases.

Figure 5 illustrates the comparison of all results over the 29 benchmark functions using Friedman's test.

Fig. 5

Ranking results of the optimization methods using Friedman's test

According to Fig. 5, the lowest (best) rank belongs to the LEA, the second rank to QGDA, and the third rank to DACG.

4.2 The LEA results on real-world optimization problems

In the following, the LEA is applied to two practical real-world engineering optimization problems with multiple constraints [59], and the results are compared against the QGDA [22], DACG [23], SSO [59], ABC [60], FF [61], PSO [62], AOA1 [63], AOA2 [64], and GOA [65] algorithms. The practical problems are the welded beam design optimization problem and the speed-reducer problem used in a gearbox [59].

The first problem is the welded beam design, in which a beam with a uniform cross section is welded to a base to endure a 6,000-pound force. Figure 6 illustrates a schematic of the beam design and the respective variables.

Fig. 6

The variables of the welded beam design problem [59]

The length L in Fig. 6 is 14 inches. The design aims to minimize construction costs by finding an admissible combination of the weld thickness h, weld length l, beam thickness t, and beam width b. The objective function is stated in relation (26):

$${\text{Min }}f = \left( {1 + c_1 } \right)h^2 l + c_2 tb\left( {L + l} \right)$$
(26)

where f is the cost function, including the cost of welding and material. Parameter \(c_1\) is the cost of the welding material per unit volume (equivalent to 0.10471 dollars per cubic inch), and \(c_2\) is the cost of the raw bar material per unit volume (equivalent to 0.04811 dollars per cubic inch). Not every combination of t, l, h, and b is acceptable, and there are limitations on the mechanical specifications of the weld and beam. For instance, shear and normal stresses, physical limitations (lengths cannot be less than zero), and the maximum displacement impose constraints on the design. The problem constraints are defined in relations (27)–(35):

$$J_{p1} \left( x \right) = 1.10471x_1^2 x_2 + 0.04811x_3 x_4 (14 + x_2 )$$
(27)

including the below constraints:

$$g_1 \left( x \right) = \tau \left( x \right) - 13600 \le 0$$
(28)
$$g_2 \left( x \right) = \sigma \left( x \right) - 30000 \le 0$$
(29)
$$g_3 \left( x \right) = x_1 - x_4 \le 0$$
(30)
$$g_4 \left( x \right) = 0.10471x_1^2 + 0.04811x_3 x_4 \left( {14 + x_2 } \right) - 5 \le 0$$
(31)
$$g_5 \left( x \right) = 0.125 - x_1 \le 0$$
(32)
$$g_6 \left( x \right) = \delta \left( x \right) - 0.25 \le 0$$
(33)
$$g_7 \left( x \right) = 6000 - p_c \left( x \right) \le 0$$
(34)

with the below variable domains:

$$0.1 \le x_1 \le 2,\quad 0.1 \le x_2 \le 10,\quad 0.1 \le x_3 \le 10,\quad 0.1 \le x_4 \le 2$$
(35)
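As an illustration of how such a constrained design problem can be evaluated inside a population-based optimizer, the sketch below computes the cost of relation (26) and applies a simple static-penalty scheme to constraints of the form g(x) ≤ 0. The stress, deflection, and buckling terms τ(x), σ(x), δ(x), and P_c(x) in relations (28)–(34) come from [59] and are not reproduced in this paper, so they would have to be supplied by the caller; all names and the penalty value are illustrative assumptions.

```python
def welded_beam_cost(x, c1=0.10471, c2=0.04811, L=14.0):
    """Construction cost of the welded beam, relation (26); x = (h, l, t, b)."""
    h, l, t, b = x
    return (1 + c1) * h ** 2 * l + c2 * t * b * (L + l)

def penalized_fitness(x, constraints, penalty=1e6):
    """Static-penalty evaluation: each violated g_i(x) <= 0 adds to the objective."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return welded_beam_cost(x) + penalty * violation

# Example: the purely geometric constraints g3 and g5 from relations (30) and (32).
geometric_constraints = [lambda x: x[0] - x[3],     # g3: h - b <= 0
                         lambda x: 0.125 - x[0]]    # g5: 0.125 - h <= 0
```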

The second problem is the speed reducer in a car's gearbox system, which is also used in many other applications. The problem includes seven variables (x1–x7) describing the sizes of different parts and aims to minimize the objective function subject to nine constraints on the variables [59]. Its mathematical formulation is captured in relation (36):

$$\begin{aligned} J_{p2} \left( x \right) = \,& 0.7854x_1 x_2^2 \left( {3.3333x_3^2 + 14.9334x_3 - 43.0934} \right) \\ & \; - 1.508x_1 \left( {x_6^2 + x_7^2 } \right) + 7.4777\left( {x_6^3 + x_7^3 } \right) \\ & \; + 0.7854\left( {x_4 x_6^2 + x_5 x_7^2 } \right) \\ \end{aligned}$$
(36)

including the below constraints:

$$g_1 \left( x \right) = \frac{27}{{x_1 x_2^2 x_3 }} - 1 \le 0$$
(37)
$$g_2 \left( x \right) = \frac{397.5}{{x_1 x_2^2 x_3^2 }} - 1 \le 0$$
(38)
$$g_3 \left( x \right) = \frac{1.93x_4^3 }{{x_2 x_3 x_6^4 }} - 1 \le 0$$
(39)
$$g_4 \left( x \right) = \frac{1.93x_5^3 }{{x_2 x_3 x_7^4 }} - 1 \le 0$$
(40)
$$g_6 \left( x \right) = \frac{{\sqrt {{\left( {\frac{745x_4 }{{x_2 x_3 }}} \right)^2 + 157.5*10^6 }} }}{85x_7^3 } - 1 \le 0$$
(42)
$$g_7 \left( x \right) = \frac{x_2 x_3 }{{40}} - 1 \le 0$$
(43)
$$g_8 \left( x \right) = \frac{5x_2 }{{x_1 }} - 1 \le 0$$
(44)
$$g_9 \left( x \right) = \frac{x_1 }{{12x_2 }} - 1 \le 0$$
(45)

with the below variable domains:

$$2.6 \le x_1 \le 3.6,\quad 0.7 \le x_2 \le 0.8,\quad 17 \le x_3 \le 28,\quad 7.3 \le x_4 \le 8.3,\quad 7.3 \le x_5 \le 8.3,\quad 2.9 \le x_6 \le 3.9,\quad 5 \le x_7 \le 5.5$$
(46)
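The speed-reducer objective of relation (36) can be evaluated in the same way and combined with the penalty scheme sketched above for relations (37)–(45); the sketch follows the reconstructed formulation of this benchmark given here, and the function name is an illustrative assumption.

```python
def speed_reducer_objective(x):
    """Speed-reducer objective, relation (36); x = (x1, ..., x7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))
```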

The results of the proposed method over 30 independent runs, in terms of the mean objective function value and standard deviation, are compared against the results of the other methods.

According to Table 2, the best value over 30 independent runs is better for the LEA than for the other methods. In the welded beam design problem, the AOA2 and SSO methods rank after the LEA, respectively. In the speed-reducer problem, the AOA1 and AOA2 methods rank after the LEA with an insignificant difference, respectively. As can be seen, the best methods of Table 1, namely the DACG and QGDA methods, could not obtain better results than the proposed method.

Table 2 Results comparison in terms of result precision over 30 different executions

4.3 The LEA results on IoT network clustering

This section compares the results of the LEA with recent dragonfly-based algorithms in different Internet of Things network applications, including BDA [24], CH-DA [29], and DA-FA [66].

Different networks with 5–100 nodes and the parameters indicated in Table 3 are generated using MATLAB and then evaluated.

Table 3 Simulation parameters

4.3.1 Investigating the proposed method in terms of packet delivery ratio

The performance of the proposed method is investigated in terms of three parameters: packet delivery ratio, energy consumption, and network lifetime, for networks of up to 200 nodes. The packet delivery ratio is defined as the number of packets successfully delivered divided by the total number of packets injected into the network. Figure 7 shows the line plot of the packet delivery ratio for 20 different networks (traffic in gigabits) with 10–200 nodes. The X-axis is the number of nodes, and the Y-axis is the ratio of delivered packets to all packets in the network.

Fig. 7

Comparing the packet delivery ratio between the LEA and other methods

According to Fig. 7, in general, a network with a larger number of nodes has a higher packet delivery ratio because more paths exist among the nodes. However, estimating the current network state and tuning the transmission power of the nodes become more challenging as the number of nodes increases and require more accuracy. According to Fig. 7, the LEA has a higher packet delivery ratio than the DA-FA, CH-DA, and BDA methods. The evaluation of the proposed method and the other methods over 20 networks with different numbers of nodes is provided in Table 4. The results indicate that increasing the number of nodes increases the packet delivery ratio approximately linearly, and the proposed method achieved an average packet delivery ratio of 0.9317.

Table 4 Comparing the packet delivery ratio among the methods

4.3.2 Investigating the proposed method in terms of energy consumption

Figure 8 compares energy consumption in the network using the LEA and the other mentioned methods. The X-axis shows the number of nodes, and the Y-axis shows the amount of consumed energy in millijoules.

Fig. 8

Comparing energy consumption between the proposed method and the other methods

According to Fig. 8, as the number of nodes increases, the rate of energy consumption is lower in the proposed method than in the others. In the LEA, the best path for sending messages is estimated considering the current state of the network; the optimal paths pass through fewer nodes, but nodes with high energy, because the transmission power of the nodes is also taken into account so that less energy is consumed while the packet is transmitted accurately. Table 5 compares the consumption of the four methods. The results are compared in terms of the energy consumed, in kilojoules, in networks with different numbers of nodes over 300 message transmission rounds, i.e., the network's energy consumption is measured for a fixed number of message transmissions.

Table 5 Comparing the consumed energy between the proposed method and the other methods

According to Table 5, the average energy consumed by the LEA over 20 different networks is 94.95 kJ, which is lower than that of the other methods.

4.3.3 Investigating the proposed method in terms of the network lifetime

Figure 9 shows the network lifetime for the LEA and the other methods. The X-axis shows the number of nodes, and the Y-axis shows the network lifetime in seconds.

Fig. 9

Comparing the network lifetime between the proposed method and the other methods

According to Fig. 9, the network lifetime is longer for the LEA than for the others. Table 6 shows the time at which all network nodes are dead for 20 different networks. According to Fig. 9, the LEA has a higher network lifetime than the DA-FA, CH-DA, and BDA methods. Regarding the network lifetime, it should be considered that a method that consumes more energy for sending messages through the nodes, and whose nodes have higher energy consumption, certainly depletes its energy earlier. Therefore, the energy consumed by the nodes should be lower to improve the network lifetime. It should be noted that nodes with insufficient energy are unable to send or receive data and are considered dead. Table 6 shows the comparison between the proposed method and the others.

Table 6 Comparing the network lifetime between the proposed method and the other methods

The lifetime values in Table 6 are given in seconds for the different networks; the calculation criterion is the death of the last node of the network. The results indicate that the average network lifetime of the proposed method is 1656.9 s, which is higher than that of the other methods.

4.3.4 Investigating the proposed method in terms of the death time of the nodes

In this section, the LEA is compared to the BDA [24], CH-DA [29], and DA-FA [66] methods. The death time of the first node is the round in which the first node of the network stops working due to energy depletion; the death time of half of the nodes is the round in which half of the network nodes have stopped working due to energy depletion; and the death time of the last node is the round in which the last node stops working due to energy depletion.

Figure 10 shows the performance comparison of the methods in terms of the number of live nodes over rounds' number.

Fig. 10

Comparing the method's performance in terms of the number of live nodes over the rounds' number

According to Fig. 10, in the 100-node network, the death of the first node, on which the network lifetime is based, occurs with more delay in the proposed method than in the other methods, while in the BDA method [24] the death of the last node occurs with more delay.

Table 7 provides the comparison of the proposed method and the other methods in terms of the death times of the nodes. The results are obtained by averaging over 20 executions per algorithm in a network with 100 nodes and 400 rounds.

Table 7 Comparing the results of the methods in terms of the death time of the nodes

According to Table 7 and Fig. 10, the death time of the first node in the LEA occurs with more delay than in the other methods, while the death time of the last node occurs with more delay in the BDA method [24]. Accordingly, the proposed method guarantees the network lifetime better than the CH-DA [29] and DA-FA [66] methods. This feature is especially valuable in applications where network coverage is important. Moreover, the delayed death time of the first node indicates that the LEA can be a superior model for monitoring or tracking applications that require very accurate data; in these applications, delaying the first death time is far more important than delaying the last. On the contrary, in applications such as periodic weather monitoring, increasing the overall network lifetime (delaying the last death time) is more desirable, and there the BDA method is superior.

5 Conclusion

A novel algorithm inspired by the lotus flower is presented in this article. The plant's propagation method, along with its leaves' self-cleaning property, is the main inspiration for the new optimization method. Since the Internet of Things is considered one of the most important applications in today's world, reducing energy consumption in the network and increasing the lifetime of nodes (considering their energy limitation) are important challenges in this field. This research presents a new method for energy-based sensor clustering, in which the cluster heads are determined using the proposed algorithm. The proposed method is compared to recently improved variants of the dragonfly algorithm for clustering Internet of Things networks. The results indicate that the LEA is better than the other methods in terms of network throughput, packet delivery ratio, and network lifetime (in terms of the death of the first node), and it also decreases energy consumption.

The experimental results on a set of different types of benchmark functions indicate that the proposed method is the best on functions with a single optimum and is among the best methods on high-dimensional functions with multiple optima. The results on the hybrid benchmark functions indicate that the proposed method is not the best on every function in this class; however, it provides competitive results and shows the best overall performance in Friedman's ranking test over the 29 benchmark functions. The efficiency of the proposed method is also tested on two real-world optimization problems with multiple constraints, the welded beam design problem and the speed-reducer problem used in a gearbox, where it turns out to have higher accuracy than the other methods.

For future work, the proposed method could be used in discrete applications, including feature selection in data mining. Moreover, improving the method with a fuzzy inference system to control the exploration and exploitation phases could be investigated. Multi-objective and many-objective optimization problems are other areas where this method could be developed. Furthermore, multi-modal optimization has numerous applications for which this algorithm could be investigated and developed.