1 Introduction

Blockchain networks (BNs) have received considerable attention in recent years, driven by the increasing popularity of cryptocurrencies such as Bitcoin [11]. BNs are decentralized peer-to-peer networks that provide anonymity, auditability, and secure operations without requiring trust in a third party [77]. Because of these capabilities, BNs have been employed in various areas, such as smart manufacturing [39], smart grid [69], the Internet of Things (IoT) [79], and supply chain [8]. To implement BNs, three major technologies are used: (1) a peer-to-peer network; (2) public key cryptography with hash functions; and (3) a blockchain protocol (e.g., a consensus mechanism). To address the synchronization problem in traditional distributed databases, blockchain technologies use a distributed consensus algorithm and combine peer-to-peer networks with cryptography, mathematics, algorithms, and economic models to create an integrated infrastructure across multiple fields.

In essence, a blockchain is a digital public ledger that records all digital transactions in chronological order in a data structure of completed transaction blocks, which is then stored across a distributed network. The block body consists of a transaction counter and transactions. The transaction counter records the number of transactions, while the list of transactions recorded by the block is simply referred to as the transactions. The maximum number of transactions that can be stored in a block is determined by both the block size and the size of each transaction. During the process of verifying and validating transactions, the system checks whether the initiator has sufficient balance to complete the transaction and guards against double-spending [50], in which the same input amount is used in two or more different transactions [32]. Blockchain users (BUs) are peers whose computational power is used to mine blocks [37]. Once these BUs verify and validate a transaction, it is included in a block.
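The block-body structure and the double-spend check described above can be illustrated with a minimal Python sketch. The class and function names, and the representation of transaction inputs as references to previously unspent outputs, are illustrative assumptions, not a description of any particular blockchain implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    tx_id: str
    inputs: tuple      # references to the outputs this transaction spends
    amount: float

@dataclass
class BlockBody:
    transactions: list = field(default_factory=list)

    @property
    def transaction_counter(self) -> int:
        # the counter simply records how many transactions the block holds
        return len(self.transactions)

def has_double_spend(txs) -> bool:
    """Detect the double-spending pattern described above: the same
    input used in two or more different transactions."""
    seen = set()
    for tx in txs:
        for inp in tx.inputs:
            if inp in seen:
                return True
            seen.add(inp)
    return False
```

A validating peer would run such a check over the candidate transactions before including them in a block.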

In order to publish a block, BUs must expend a significant amount of their computing resources to solve a computational puzzle; this is known as the mining process. The BU that solves the puzzle first becomes the winning BU and receives an incentive for creating the new block. Once a block has been created, a consensus mechanism is used by all peers in the network to verify it [72]. The mining process involves repeatedly querying a collision-resistant hash function, which typically requires significant resources and high computational capacity from BUs. However, since resources are often limited, this can lead to resource allocation issues for BUs and create challenges in managing the growth of the network.

Mobile edge computing (MEC) is a promising paradigm that enhances the computing capabilities of IoT devices by offloading processes to a MEC server [55, 68]. Numerous studies have been conducted on MEC-operated wireless blockchain networks (WBNs). For instance, [44] established a MEC-operated WBN and introduced an alternating direction method of multipliers (ADMM) scheme for resource management. However, if the earnings of the MEC service provider (MSP) are insufficient, it may choose not to provide computing services to BUs, which could ultimately leave the WBN unable to function.

In a related study, [47] proposed a framework for managing resources in MEC-operated WBNs using deep learning. Additionally, [31] introduced two bidding frameworks for resource management in MEC-operated WBNs: the multi-demand framework and the constant-demand framework. In the former, an auction mechanism is designed to achieve optimal social welfare, while in the latter, an estimation method is introduced to simultaneously consider computational efficiency, individual rationality, and truthfulness.

Furthermore, [73] explored the interaction between cloud/fog providers and BUs in a proof-of-work-based BN using a game-theoretic approach. However, it should be noted that these studies [31, 47, 73] do not take into account the transfer delay between the participating BUs and the MEC server. If many BUs offload processes onto the MEC server simultaneously, their transmissions may interfere significantly, resulting in high transmission delays [29]. Therefore, future research should consider the impact of transfer delay on resource management in MEC-operated WBNs to further optimize the allocation of computing resources and improve network performance.

Recently, multiple studies of resource management in BNs have been proposed. Researchers mainly consider two basic problems: mining decisions and resource allocation. The first is to determine whether a BU contributes to the mining process or not, and the second is to decide the amount of resources assigned to each subscribed BU.

Rizun [54] conducted a study on mining decisions, taking into account the block space supply curve and the mempool demand curve to demonstrate how a subscribing BU selects transactions to maximize their profit without a block size limit. By considering both the orphaning risk and revenue from transaction fees, the study found that the block size corresponding to the intersection point of these two curves is the optimal size for maximizing mining profit.

Kiayias et al. [34] introduced a mining decision stochastic game in the Bitcoin network to account for the randomness of the mining process, where multiple BUs participate. BUs typically choose to engage in mining to obtain stable profits.

Houy [28] presented and analyzed two Bitcoin mining games for BUs. When deciding how many transactions to include in the block they are mining, BUs need to consider the trade-off between including more transactions to earn higher transaction fees, and including fewer transactions to reduce the time needed to propagate their block solution and increase the likelihood of their block being included in the blockchain first.

Some BUs may engage in harmful actions within the mining pool, leading to a waste of distributed computing resources and posing a risk to the effectiveness of BNs. To address this issue, a game-theoretic approach has been proposed in [66] to incentivize BUs to mine honestly. While these methods have been shown to be effective, they have only been applied to wired BNs. With the emergence of IoT devices, there has been increased attention on Wireless BNs (WBNs) operating on these devices [7]. However, IoT devices are unable to sustain the mining process on local machines.

Liu et al. [45] conducted an analysis on the dynamic selection of mining pools in BNs. The selection of mining pools was represented as an evolutionary game, and evolutionary stability was produced through theoretical analysis. Furthermore, Shijing Yuan et al. [76] proposed an optimization method for a blockchain-supported edge video streaming system. The method aims to determine the offloading mode and resource allocation to achieve an optimal balance between accuracy and energy consumption.

In this context, many researchers have utilized optimization algorithms in their studies; these methods have numerous advantages [56], including (1) self-regulation; (2) flexibility to dynamic changes; (3) the ability to evaluate multiple solutions simultaneously; and (4) not requiring bounded mathematical characteristics to be implemented. The family of optimization techniques includes, but is not limited to, differential evolution (DE) [6, 57,58,59,60], genetic algorithms [15, 42, 53], ant colony optimization algorithms [36, 52, 74], particle swarm optimization (PSO) [17, 26], gray wolf optimizer [46, 70], firefly optimizer [10, 81], flower pollination optimizer [1], whale optimization algorithm [2], artificial bee colony [78], binary slime mold optimizer [3], binary pigeon-inspired optimizer [12], cuckoo search optimizer [24], moth search optimizer [23], gain sharing knowledge-based optimization technique [5], diversified sine–cosine optimizer based on DE [25], light spectrum algorithm [4], binary light spectrum algorithm [4], and Aquila algorithm [9].

This paper differs from previous works in that it takes into account both MSP earnings and transmission delay in a MEC-operated WBN. To increase the overall earnings of all BUs, the paper optimizes the BUs’ mining decisions and their resource allocation simultaneously using a modified version of the Henry gas solubility optimization (HGSO) called the Chaotic Henry single gas solubility optimization (CHSGSO) approach. When HGSO is utilized to address this issue, each individual encodes the resource allocations and mining decisions of the joining BUs. However, since not all BUs participate in mining, considering all BUs in each individual would lead to an overloaded search space and poor performance. To address this problem, the paper develops a new approach, CHSGSO, which generates a population of variable-length individuals that represent only the participating BUs.

The main contributions of this paper can be listed in the following points:

  • We propose a modified HGSO in which the resource allocation of only the joining BUs is encoded as an individual. An adaptive strategy is designed to tune the size of each individual.

  • A chaotic map has been integrated into the original HGSO to enhance the convergence rate.

  • Comprehensive experiments are conducted on a set of different instances to validate the superiority of CHSGSO.

  • The efficacy of CHSGSO is then affirmed through a fair comparison with four well-known meta-heuristic methods.

The remainder of the paper is structured as follows. Section 2 introduces the problem formulation and the system model. Section 3 presents the proposed CHSGSO. The empirical results are investigated in Sect. 4. Finally, conclusions and some potential future works are presented in Sect. 5.

2 The problem formulation and system model

The MEC-enabled WBN shown in Fig. 1 consists of a group of n IoT devices acting as mining BUs, where \(N=\{1,2,\ldots ,n\}\). If a BU plans to join mining, it needs to purchase computing resources from the MSP and then transfer its task to the MEC server for the mining process. For simplicity, the mining task is a 2-tuple \(\{B_i,C_i\}\), where \(B_i\) is the block size and \(C_i\) is the computation workload/intensity in CPU cycles per bit.

Fig. 1: A MEC-enabled WBN

In the considered BN, \(d = \{d_1,\ldots , d_n\}\) represents the BUs’ mining decisions, where \(d_i\in \{0,1\}\ (i\in N)\) indicates whether the \(i^{th}\) BU chooses to join mining. The number of participating BUs is therefore \(n^\prime =\sum _{i\in N}d_i\). Additionally, the required resources must be allocated to the joining BUs, i.e., computation resources (CPU cycles/s) \(f=\{f_1, \ldots ,f_{n^\prime} \}\) and transmission power \(p=\{p_1,\ldots , p_{n^\prime} \}\) for all participating BUs.

For the considered BN, shown in Fig. 1, the mining task is successfully executed only after the completion of the following three phases.

Offloading phase In this phase, participating BUs concurrently transfer their tasks to the MEC server with a transmission rate given by:

$$\begin{aligned} R_i=b \log _2\left( 1+\dfrac{p_i H_i}{\sigma ^2+\sum _{j\in n^\prime /i}d_jp_jH_j}\right) , \end{aligned}$$
(1)

where \(H_i\) represents the channel state information of the ith participating BU, the term \(\sum _{j\in {n}^\prime /i}d_jp_jH_j\) represents the interference received from other BUs, b represents the channel bandwidth, and \(\sigma^2\) represents the background noise power. For each participating BU, the task transmission time \(T_i^t\) and transmission energy consumption \(E^t_i\) of the ith BU can be formulated as:

$$\begin{aligned} T^t_i=\dfrac{B_i}{R_i},\quad \forall i\in n^\prime , \end{aligned}$$
(2)

and

$$\begin{aligned} E^t_i=p_iT^t_i,\quad \forall i\in n^\prime , \end{aligned}$$
(3)

where \(R_i\) is the transmission rate, \(B_i\) is the block size, \(T^t_i\) is the task transmission time, and \(E^t_i\) is the transmission energy consumption.
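Equations 1–3 translate directly into code. The sketch below is illustrative: the list-based representation of the per-BU quantities and the function names are assumptions, not part of the model.

```python
import math

def transmission_rate(i, p, H, d, b, sigma2):
    """Eq. (1): rate of BU i under interference from the other
    participating BUs (those j != i with d[j] == 1)."""
    interference = sum(d[j] * p[j] * H[j] for j in range(len(p)) if j != i)
    return b * math.log2(1.0 + p[i] * H[i] / (sigma2 + interference))

def offload_time_energy(i, B, p, H, d, b, sigma2):
    """Eqs. (2)-(3): transmission time T_i^t = B_i / R_i and
    transmission energy E_i^t = p_i * T_i^t."""
    R = transmission_rate(i, p, H, d, b, sigma2)
    T_t = B[i] / R
    return T_t, p[i] * T_t
```

With a single BU and no interference, the rate reduces to the familiar single-user Shannon formula.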

Mining phase

During this phase, the MEC server carries out the mining tasks transmitted by the participating BUs. The time taken and the energy consumed by the MEC server to execute the ith BU’s task are respectively given as:

$$\begin{aligned} T^m_i=\dfrac{B_iC_i}{f_i},\quad \forall i\in n^\prime , \end{aligned}$$
(4)

and

$$\begin{aligned} E^m_i=k_1f^3_iT^m_i,\quad \forall i\in n^\prime , \end{aligned}$$
(5)

where \(k_1\) is the effective capacitance coefficient.
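Equations 4 and 5 can be sketched as follows; the function names are illustrative.

```python
def mining_time(B_i, C_i, f_i):
    """Eq. (4): T_i^m = B_i * C_i / f_i -- block size (bits) times CPU
    cycles per bit, divided by the allocated CPU cycles per second."""
    return B_i * C_i / f_i

def mining_energy(B_i, C_i, f_i, k1):
    """Eq. (5): E_i^m = k1 * f_i^3 * T_i^m, with k1 the effective
    capacitance coefficient."""
    return k1 * f_i ** 3 * mining_time(B_i, C_i, f_i)
```

Note that substituting Eq. 4 into Eq. 5 gives \(E^m_i = k_1 f_i^2 B_i C_i\): allocating more CPU cycles per second shortens the mining time but raises the energy cost quadratically.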

Propagation phase After completing the mining phase, if the BU executes its mining task fast enough, it earns a reward. The probability of obtaining a reward, as a function of mining time, is given as:

$$\begin{aligned} P^m_i=\dfrac{k_2}{T^m_i},\quad \forall i\in n^\prime , \end{aligned}$$
(6)

where \(k_2\) is a scaling factor.

On the other hand, if the BU executes its mining task too slowly, it will not receive a reward: consensus may not be reached and the block may be rejected. In blockchain networks, blocks are generated according to a Poisson process with a constant mean rate \(\lambda\), and the propagation time \(T^0_i={\xi }B_i\) is linearly proportional to the block size \(B_i\). The orphaning probability of the ith BU is given by:

$$\begin{aligned} P^0_i=1-e^{-\lambda ({\xi }B_i+T^s_i)}=1-e^{-\lambda (T^0_i+T^s_i)},\quad \forall i\in n^\prime , \end{aligned}$$
(7)

where \(\xi\) represents a delay factor and \(T^s_i\) represents the starting time of mining. In this study, the mining task of the ith joining BU is executed as soon as it is received by the MEC server; hence \(T^s_i=T^t_i\).
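A sketch of Eqs. 6 and 7 follows. Note that Eq. 6 is only meaningful as a probability when the mining time is large enough that the value stays below one; the function names are illustrative.

```python
import math

def reward_probability(T_m, k2):
    """Eq. (6): P_i^m = k2 / T_i^m -- faster mining, higher reward chance."""
    return k2 / T_m

def orphan_probability(B_i, T_s, lam, xi):
    """Eq. (7): P_i^0 = 1 - exp(-lam * (xi * B_i + T_s)), where
    T_i^0 = xi * B_i is the size-proportional propagation delay and
    T_s is the mining start time (T_s = T_i^t in this study)."""
    return 1.0 - math.exp(-lam * (xi * B_i + T_s))
```

The orphaning probability grows monotonically with the block size, which is the trade-off exploited in the profit model below.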

2.1 Profit model

As previously mentioned, if the mining task is executed successfully and quickly enough, the BU obtains a reward, consisting of a fixed reward \(\omega\) and a variable reward \(\rho B_i\), where \(\rho\) is the variable reward factor. In addition, BUs incur computing and communication costs. Therefore, the profit of the ith BU is determined by:

$$\begin{aligned} F^\textrm{BU}_i=(\omega +\rho B_i)P^m_i(1-P^0_i)-\tau _1E^t_i-\tau _2f_i,\quad \forall i\in n^\prime , \end{aligned}$$
(8)

where \(\tau _1\) and \(\tau _2\) represent the unit costs of transmission energy and computation resources, respectively. The total profit of all BUs is determined by:

$$\begin{aligned} F^\textrm{BU}=\sum _{i\in n^\prime }F^\textrm{BU}_i. \end{aligned}$$
(9)

Additionally, while the MSP earns a profit by selling computation resources to the BUs, it must pay for both the no-load energy \(E_0\) and the mining energy consumption. So, the MSP profit is determined by:

$$\begin{aligned} F^\textrm{MSP}=\sum _{i\in n^\prime }{(\tau _2f_i-\tau _3E^m_i)-\tau _3E_0}, \end{aligned}$$
(10)

where \(\tau _3\) is the unit cost of consumed energy.
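The profit model of Eqs. 8–10 can be sketched as follows. The function and argument names are illustrative; `P_m` and `P_0` denote the reward and orphaning probabilities of Eqs. 6 and 7.

```python
def bu_profit(omega, rho, B_i, P_m, P_0, E_t, f_i, tau1, tau2):
    """Eq. (8): expected reward (fixed + variable), discounted by the
    orphaning risk, minus transmission-energy and computation costs."""
    return (omega + rho * B_i) * P_m * (1.0 - P_0) - tau1 * E_t - tau2 * f_i

def msp_profit(f, E_m, E_0, tau2, tau3):
    """Eq. (10): revenue from selling computation resources, minus the
    mining energy costs and the no-load energy cost."""
    return sum(tau2 * fi - tau3 * Emi for fi, Emi in zip(f, E_m)) - tau3 * E_0
```

Note how \(\tau_2 f_i\) appears as a cost in the BU profit and as revenue in the MSP profit: the two objectives are coupled through the price of computation.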

2.2 Problem formulation

For the investigated BN, the mining decision (d), transmission power (p), and computation resources (f) are all optimized at the same time to maximize the overall profit of all BUs. The profit model is expressed as a maximization problem as follows:

$$\begin{array}{ll} \underset{d,p,f}{{\max }} & {F^\textrm{BU}=\sum _{i\in n^\prime }F^\textrm{BU}_i}, \\ \text {s.t.} & C1: d_i\in \{0,1\},\quad \forall i\in n^\prime ,\\ &C2:f^{\min }\le f_i \le f^{\max },\quad \forall i\in n^\prime , \\ &C3:p^{\min }\le p_i \le p^{\max },\quad \forall i\in n^\prime , \\ &C4:\sum _{i\in n^\prime }f_i\le f^\textrm{total},\\ &C5: T^t_i+ T^m_i+ T^0_i\le T^{\max }_i,\quad \forall i\in n^\prime ,\\ &C6: F^\textrm{MSP}\ge 0.\\ \end{array}$$
(11)

where C1 denotes that each BU can choose whether or not to join mining; C2 specifies the maximum and minimum computation resources assigned to each joining BU; C3 ensures that the transmission power assigned to each BU falls within the maximum and minimum allowable values; C4 states that the overall computation resources assigned to the BUs involved in mining cannot exceed the total computation resources of the MEC server; C5 guarantees that the overall time of propagation, mining, and offloading cannot exceed the maximum time limit; and C6 guarantees that the MSP profit is nonnegative.
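Constraints C1–C6 can be collected into a single feasibility check, sketched below under the assumption that the per-phase times and the MSP profit have already been computed for the \(n^\prime\) participating BUs.

```python
def is_feasible(d, f, p, T_t, T_m, T_o, T_max,
                f_min, f_max, p_min, p_max, f_total, F_msp):
    """Check constraints C1-C6 of problem (11); all sequences are
    indexed over the n' participating BUs (illustrative sketch)."""
    n_prime = len(f)
    if any(di not in (0, 1) for di in d):                              # C1
        return False
    if any(not (f_min <= f[i] <= f_max) for i in range(n_prime)):      # C2
        return False
    if any(not (p_min <= p[i] <= p_max) for i in range(n_prime)):      # C3
        return False
    if sum(f) > f_total:                                               # C4
        return False
    if any(T_t[i] + T_m[i] + T_o[i] > T_max for i in range(n_prime)):  # C5
        return False
    return F_msp >= 0                                                  # C6
```

In a penalty-based meta-heuristic such as the one proposed here, a check of this form (or a graded degree-of-violation variant) is what separates feasible agents from infeasible ones.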

In the studied BN, we assume that all BUs are identical and share the same transmission power and computation resource ranges.

3 Proposed improved Henry gas solubility optimization

The HGSO algorithm [27] is based on a physical property known as Henry’s law [63], which governs the solubility of gases. Pressure and temperature play a significant role in this law: the solubility of a gas decreases with increasing temperature and increases with increasing pressure.

3.1 Henry’s law

Henry’s law is a gas law that was formulated by William Henry in 1803 [40]. According to Henry’s law, “When the temperature is constant, the amount of gas dissolved in a given type of liquid is directly proportional to its partial pressure above the liquid”. Therefore, Henry’s law is highly dependent on temperature. Staudinger and Roberts [63] proposed that the gas solubility (\(S_\textrm{g}\)) is directly proportional to the gas partial pressure (\(P_\textrm{g}\)), as described by the following equation:

$$\begin{aligned} S_\textrm{g}=H \times P_\textrm{g}, \end{aligned}$$
(12)

where H represents Henry’s constant, which is specific to a particular gas–solvent pair at a given temperature, and \(P_\textrm{g}\) represents the partial pressure of the gas.

In addition, the temperature dependence of Henry’s constant must be taken into account. Henry’s constant changes with the system temperature, as described by the following Van’t Hoff equation:

$$\begin{aligned} \frac{d \ln H}{d(1 / T)}=\frac{-\Delta _\textrm{sol} E}{R}, \end{aligned}$$
(13)

where \(\Delta _\textrm{sol} E\) represents the dissolution enthalpy and R represents the gas constant. Integrating Eq. 13 yields the following expression, in which A and B are two factors capturing the temperature dependence of H:

$$\begin{aligned} H(T)=\exp (B / T) \times A, \end{aligned}$$
(14)

where H is a function of the parameters A and B. Alternatively, H can be expressed relative to its value at the reference temperature \(T^{\theta }=298.15~\textrm{K}\):

$$\begin{aligned} H(T)=H^{\theta } \times \exp \left( \frac{-\Delta _{\textrm{sol}} E}{R}\left( 1 / T-1 / T^{\theta }\right) \right) , \end{aligned}$$
(15)

The Van’t Hoff equation is valid when \(\Delta _\textrm{sol} E\) is constant, so Eq. 15 can be rewritten as:

$$\begin{aligned} H(T)=\exp \left( -C \times \left( 1 / T-1 / T^{\theta }\right) \right) \times H^{\theta }, \end{aligned}$$
(16)
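Equations 12 and 16 are straightforward to evaluate. In the sketch below, the constant C of Eq. 16 (the dissolution enthalpy divided by the gas constant) is assumed to be supplied directly; the function names are illustrative.

```python
import math

def henry_constant(T, H_theta, C, T_theta=298.15):
    """Eq. (16): H(T) = H_theta * exp(-C * (1/T - 1/T_theta)), where
    C = Delta_sol(E) / R is treated as temperature-independent."""
    return H_theta * math.exp(-C * (1.0 / T - 1.0 / T_theta))

def solubility(H, P_g):
    """Eq. (12): S_g = H * P_g."""
    return H * P_g
```

At the reference temperature the exponential equals one, so \(H(T^{\theta})=H^{\theta}\) exactly.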

3.2 Inspiration source

Henry’s law was formulated by William Henry in 1803. Generally, the maximum amount of solute that can dissolve in a given amount of solvent at a given pressure and temperature is called the solubility [48]. HGSO is inspired by the behavior described by Henry’s law. According to Eqs. 12 through 16, Henry’s law can be utilized to estimate the solubility of low-solubility gases in liquids. Pressure and temperature are the two parameters that affect solubility: at higher temperatures gases are less soluble (although solids become more soluble), whereas the solubility of gases increases with increasing pressure [14].

3.3 Mathematical model of Henry gas solubility optimization

The mathematical procedures of the HGSO algorithm are described in this subsection as follows [27]:

Step 1: Equation 17 is used to create the initial population of candidate solutions with N gases:

$$\begin{aligned} x^{(0)}_i = lb_i + r \times (ub_i - lb_i), \end{aligned}$$
(17)

where \(x^{(0)}_i\) is the initial position of the ith gas, and \(lb_i\) and \(ub_i\) are the position’s lower and upper limits, respectively, for the ith candidate solution. r is a randomly generated real value in the range [0, 1].

Step 2: Candidates from the population are organized into groups referred to as clusters. Each cluster has an equal number of candidates that share the same attributes. Equation 18 is used to initialize these properties:

$$\begin{aligned} H^{(0)}_j = l_1 \times rand_1, \,\,P^0_{i,j} = l_2 \times rand_2,\,\,C^0_j = l_3 \times rand_3, \end{aligned}$$
(18)

where \(H^{(0)}_j\) denotes the initial value of Henry’s coefficient for the jth cluster, \(P^{(0)}_{i,j}\) represents the initial partial pressure of the ith gas in the jth cluster, and \(C^{(0)}_j\) represents the initial constant of cluster j. \(l_1\), \(l_2\), and \(l_3\) are fixed values of \(5\times 10^{-2}\), 100, and \(10^{-2}\), respectively.

Step 3: The fitness value of the gas particles in each cluster is computed, and the best solution \(x_{j,best}\) of each cluster is identified. All candidate solutions are then sorted according to fitness to obtain the global best solution \(x_{best}\).

Step 4: As the applied partial pressure on gas particles changes during each iteration, Henry’s coefficient \(H^{(t+1)}_j\) is updated according to Eq. 19:

$$\begin{aligned} \nonumber H^{(t+1)}_j& = H^{(t)}_j\times e^ {-C_j \times (1/T^{(t)} - 1/T^\theta )},\\ T^{(t)}& = e^{(-t/t_{\max })}, \end{aligned}$$
(19)

where \(H^{(t)}_j\) is Henry’s constant for cluster j at iteration t, \(T^\theta\) is a fixed parameter with value 298.15, \(T^{(t)}\) represents the temperature at iteration t, and \(t_{\max }\) is the maximum number of iterations.

Step 5: During the tth iteration, Eq. 20 is used to change the solubility \(S^{(t)}_{i,j}\) of the ith gas particle in the jth cluster:

$$\begin{aligned} S^{(t)}_{i,j} = K \times H^{(t+1)}_j \times P^{(t)}_{i,j}, \end{aligned}$$
(20)

where \(P^{(t)}_{i,j}\) represents the applied pressure on ith gas particle in jth cluster, and K is a fixed value.

Step 6: in this step, the ith gas particle position of the jth cluster is updated using Eq. 21 for iteration \(t = t + 1\).

$$\begin{aligned} x^{(t+1)}_{i,j}& = x^{(t)}_{i,j}+F\times r_1 \times \gamma \times (x_{j,best}-x^{(t)}_{i,j} )\nonumber \\ & \quad +F \times r_2 \times \alpha \times (S^{(t)}_{i,j}\times x_{best}-x^{(t)}_{i,j}),\nonumber \\ \gamma& = \beta \times \exp \left( -\frac{F^{(t)}_{best}+\epsilon }{F^{(t)}_{i,j}+\epsilon }\right) ,\quad \epsilon = 0.05, \end{aligned}$$
(21)

where F is a flag that controls the search direction, \(\gamma\) is the ability of the gas to interact with the other gases in its cluster, and \(\alpha\) is the influence of the other gases on the ith particle. \(r_1\) and \(r_2\) are random values in [0, 1], and \(\epsilon =0.05\).

Step 7: Since HGSO is a heuristic algorithm, it may become trapped in local optima. Therefore, Eq. 22 is used to rank the solutions and determine the number of worst solutions \(N_w\) to re-initialize:

$$\begin{aligned} N_w=N\times rand\times (c_2-c_1)+c_1, \end{aligned}$$
(22)

where N is the total number of individuals in the population and rand is a random number between 0 and 1. \(c_1\) and \(c_2\) are constants that specify the percentage of worst solutions. Equation 17 is used to reinitialize the positions of the worst solutions selected in this process.
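The seven steps above can be sketched as a compact HGSO loop for a box-constrained minimization problem. This is a simplified illustration, not the authors’ implementation: the solubility term uses only the first pressure component, the direction flag F is drawn at random each update, and the constants follow the values quoted for Eqs. 17–22.

```python
import math
import random

def hgso(obj, lb, ub, dim, n_gas=20, n_clusters=4, t_max=200, seed=0):
    """Minimal sketch of HGSO Steps 1-7 (minimization)."""
    rng = random.Random(seed)
    l1, l2, l3 = 5e-2, 100.0, 1e-2                    # Eq. 18 constants
    K, alpha, beta = 1.0, 1.0, 1.0
    c1, c2, T_theta, eps = 0.1, 0.2, 298.15, 0.05
    per = n_gas // n_clusters
    # Steps 1-2: initial positions (Eq. 17) and cluster properties (Eq. 18)
    X = [[lb + rng.random() * (ub - lb) for _ in range(dim)] for _ in range(n_gas)]
    H = [l1 * rng.random() for _ in range(n_clusters)]
    P = [[l2 * rng.random() for _ in range(dim)] for _ in range(n_gas)]
    C = [l3 * rng.random() for _ in range(n_clusters)]
    fit = [obj(x) for x in X]
    # Step 3: global best
    best = min(range(n_gas), key=lambda i: fit[i])
    x_best, f_best = X[best][:], fit[best]
    for t in range(t_max):
        T = math.exp(-t / t_max)                      # temperature, Eq. 19
        for j in range(n_clusters):
            # Step 4: update Henry's coefficient (Eq. 19)
            H[j] *= math.exp(-C[j] * (1.0 / T - 1.0 / T_theta))
            members = list(range(j * per, (j + 1) * per))
            jb = min(members, key=lambda i: fit[i])   # cluster best
            for i in members:
                S = K * H[j] * P[i][0]                # Step 5: solubility (Eq. 20)
                F = 1.0 if rng.random() < 0.5 else -1.0
                gamma = beta * math.exp(-(f_best + eps) / (fit[i] + eps))
                for k in range(dim):                  # Step 6: position update (Eq. 21)
                    X[i][k] += F * rng.random() * gamma * (X[jb][k] - X[i][k]) \
                             + F * rng.random() * alpha * (S * x_best[k] - X[i][k])
                    X[i][k] = min(max(X[i][k], lb), ub)
                fit[i] = obj(X[i])
                if fit[i] < f_best:
                    x_best, f_best = X[i][:], fit[i]
        # Step 7: re-initialize the Nw worst gases (Eq. 22)
        Nw = max(1, int(n_gas * (rng.random() * (c2 - c1) + c1)))
        for i in sorted(range(n_gas), key=lambda i: fit[i])[-Nw:]:
            X[i] = [lb + rng.random() * (ub - lb) for _ in range(dim)]
            fit[i] = obj(X[i])
    return x_best, f_best
```

On a simple 2-D sphere function this sketch steadily reduces the best fitness, which is all it is meant to demonstrate.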

Algorithm 1 outlines the pseudo-code of the HGSO algorithm’s step-by-step structure.

Algorithm 1: Pseudo-code of the HGSO algorithm

3.4 Chaotic improved Henry gas solubility optimization

3.4.1 Chaotic systems

Dynamic systems are mathematical functions that describe the movement of a point in geometrical space over time. A dynamic system has a state at any given time and can be represented by a vector function within an appropriate state-space model. The evolution rule determines the next state of a dynamical system from the current state and its behavior. Most dynamic systems are deterministic, but some generate stochastic random events or have an incomplete description. A deterministic dynamic system whose time-dependent analytical solution is available can be completely modeled to predict its future behavior. Dynamic systems can be further classified into two types: linear and nonlinear. A nonlinear dynamic system is one whose output is not proportional to the changes made in the input; a linear dynamic system is one whose evolution is a linear function, i.e., changes in the output are linearly proportional to changes in the input. Chaotic systems are a type of nonlinear dynamic system. Chaotic maps are a field of study in mathematics where dynamic systems produce states that appear random and irregular but are fully governed by the initial seed conditions.

To analyze chaotic behavior in dynamic systems, bifurcation diagrams are often plotted; these illustrate the relationship between the chaotic states and their corresponding control parameters. The Lyapunov exponent plays a crucial role in determining whether a chaotic map is useful for pseudo-random generation, as it quantifies the map’s sensitivity to slight changes in the seed parameters, such as the initial conditions and control parameters. The idea of using chaotic systems instead of random processes has been adopted in several areas, including computer science, economics, and engineering [19, 33, 38, 51, 62, 64]. One of these areas is optimization theory: in random-based optimization algorithms, the randomness can be generated using chaotic dynamics instead of random processes. Chaotic maps can be classified into two categories: 1D [30, 75, 80] and multi-dimensional [13, 16, 21, 22, 67].

1D chaotic maps have a modest structure and simple dynamic characteristics, and are easy to implement. To generate a pseudo-random sequence (PRS), only one variable and a few parameters are used. In contrast, 2D chaotic maps possess two variables and a greater number of control parameters. In the present study, the authors use a 1D sine map, defined as [23]:

$$\begin{aligned} x_{i+1}=\frac{\mu }{4}\sin {({\pi }x_i)}, \end{aligned}$$
(23)

where \(\mu\) is the control parameter with range \(\mu \in [0,4]\). As shown in Fig. 3, chaotic behavior occurs in the sine map only when \(\mu \ge 3.57\). The bifurcation diagram in Fig. 2 depicts the possible state values of the system for each parameter value: if infinitely many state values correspond to a given parameter value, the system exhibits chaotic behavior at that parameter.
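The sine map of Eq. 23 is trivial to iterate; the sketch below generates a chaotic sequence of the kind used as a pseudo-random source in the proposed algorithm.

```python
import math

def sine_map_sequence(x0, mu, n):
    """Eq. (23): iterate x_{i+1} = (mu / 4) * sin(pi * x_i).
    Chaotic behavior is expected for mu >= 3.57 (cf. Figs. 2 and 3)."""
    seq = [x0]
    for _ in range(n - 1):
        seq.append(mu / 4.0 * math.sin(math.pi * seq[-1]))
    return seq
```

For \(\mu = 4\) and a seed in (0, 1), the iterates stay in [0, 1], and two nearby seeds diverge rapidly, which is the sensitivity to initial conditions discussed above.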

Fig. 2: The bifurcation diagram for the sine map

Fig. 3: The Lyapunov exponent for the sine map

As remarked in [35, 49, 61, 65], replacing a random variable with a chaotic sequence enhances the optimization algorithm’s global convergence speed and exploration/exploitation.

In the HGSO, the standard update procedure (Algorithm 1) assumes at least two clusters with at least two gases each, and all gases have the same search-space dimensions. As shown in Eq. 21, the balance between the exploration and exploitation phases is controlled by fine-tuning three control parameters [27]:

  1. the solubility \(S_{i,j}\) of gas i in cluster j, which depends on the iteration time;

  2. the ability \(\gamma\) of gas i in cluster j to interact with the gases in its cluster, which transfers the search individuals from the global to the local phase and vice versa; and

  3. the flag F, which changes the direction of the search agent and provides diversity in both (\(\pm\)) directions.

It can be observed that the system described by (11) is a nonlinear optimization problem with mixed variables, since p and f are continuous while d is binary. Consequently, this problem is hard to solve with the original HGSO algorithm. In this paper, the authors propose a modified HGSO algorithm to handle system (11). The steps of the proposed algorithm are described in the following subsections.

3.4.2 Individual encoding

It should be noted that if a BU does not choose to join the mining process (i.e., \(d_i = 0\)), it does not need to send its task to the MEC server or acquire computing resources for mining. In this situation, allocating resources to such BUs is unnecessary. Thus, the encoding scheme generally utilized in the original HGSO creates redundancy in the search space, which can degrade performance. A new encoding scheme is proposed to solve this problem.

In the proposed individual encoding (Fig. 4), each individual contains continuous variables representing the resource allocations of the BUs that join mining. The length of each individual, which represents the participating BUs only, is therefore variable. Thus, the optimal solution is searched for in a reduced search space.
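Under this scheme, an agent can be sketched as a variable-length mapping from each randomly chosen participating BU to its \((f_i, p_i)\) pair. The dictionary representation below is an illustrative assumption, not the authors’ data structure.

```python
import random

def random_agent(n, f_min, f_max, p_min, p_max, rng=random):
    """Create one variable-length agent: resource allocations (f_i, p_i)
    are stored only for the BUs that join mining, so agents differ in length."""
    joining = sorted(rng.sample(range(n), rng.randint(1, n)))
    return {i: (rng.uniform(f_min, f_max), rng.uniform(p_min, p_max))
            for i in joining}
```

Because non-participating BUs simply do not appear in the mapping, no search effort is wasted on resources that would never be used.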

3.4.3 Agent structure

The proposed CHSGSO algorithm maintains a population of solutions called agents. Each agent contains a single individual (i.e., a single gas). Agents have different lengths depending on how many BUs are randomly chosen to participate. The structure of the population of agents is presented in Fig. 4.

Fig. 4: Representation of the proposed encoding scheme and the population of agents

3.4.4 Initialization

In the initialization, the participating BUs are randomly chosen for each agent, with a different number of BUs per agent. Then, the initial resource allocation values of all chosen BUs are randomly generated. The fitness value of each BU, the total fitness, and the degree of constraint violation are then evaluated, and the best agent is returned for further comparison. The initialization phase is described in Algorithm 2.

Algorithm 2: Initialization phase of CHSGSO

3.4.5 Update p and f

While searching for the best agent, the position (i.e., \(f_i\) and \(p_i\)) of the ith agent at iteration \(t+1\) is updated using Eq. 24 as follows:

$$\begin{aligned} \nonumber x^{(t+1)}_{i,j}& = F\times {x^{(t)}_{i,j}}\times (b_1+b_2\times \sin (\pi \times {x^{(t)}_{i,j}}))\\ & \quad +F \times r_2 \times \alpha \times (S^{(t)}_{i,j}\times x_{i,j}-1), \end{aligned}$$
(24)

where the term \((b_1+b_2\times \sin (\pi \times {x^{(t)}_{i,j}}))\) provides the balance between exploration and exploitation by generating new positions through its chaotic nature, and \(b_1\) and \(b_2\) control the range of the generated positions.
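Taken literally, the update of Eq. 24 can be sketched per decision variable as follows. The random drawing of the flag F and the interpretation of the last term follow Eq. 24 as printed, and are therefore hedged assumptions rather than a definitive implementation.

```python
import math
import random

def chaotic_update(x, S, b1=0.5, b2=0.5, alpha=1.0, rng=random):
    """Eq. (24): the chaotic sine term (b1 + b2*sin(pi*x)) replaces the
    random coefficient of Eq. (21); b1 and b2 bound the generated range."""
    F = 1.0 if rng.random() < 0.5 else -1.0   # search-direction flag
    r2 = rng.random()
    return (F * x * (b1 + b2 * math.sin(math.pi * x))
            + F * r2 * alpha * (S * x - 1.0))
```

The sine term injects the chaotic variability discussed in Sect. 3.4.1, while F still flips the search direction as in the original HGSO.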

3.4.6 Update mining decision d

For the mining decision optimization, the BUs involved in the mining process must be selected and updated. The following two steps are performed to optimize the mining decision:

1-Generate new gas agents In this step, all agents are sorted according to fitness and the best agent is selected. Then, using a predetermined selection probability \(Ps_r\), a number of the worst agents are chosen and replaced by new agents with the same number of BUs as the best agent. The details of this process are described in Algorithm 3.

Algorithm 3: Generation of new gas agents

2-Apply mutation To make the agents more diverse and give them a way to escape local optima, a small number of BUs are replaced using a mutation operation with a predetermined probability \(Pm_r\). The steps for applying mutation are listed in Algorithm 4.

Algorithm 4: Mutation operation

4 Experiments and discussion

In this section, the results of different computational experiments with the proposed CHSGSO algorithm are compared with those obtained by competing meta-heuristic algorithms from the literature. The parameter settings and performance measures adopted to validate the superiority of the proposed algorithm are also described.

4.1 Algorithms for comparison

The performance of the proposed CHSGSO is compared with the following algorithms.

  • \(\hbox {ACO}_\textrm{MV}\) [41]: \(\hbox {ACO}_\textrm{MV}\) is a mixed-variable optimization algorithm that combines a continuous relaxation approach with a categorical optimization approach. Together, these enable \(\hbox {ACO}_\textrm{MV}\) to address both resource allocation and mining decisions.

  • DE [18]: The DE algorithm is one of the most robust evolutionary algorithms because of its fast convergence, simplicity, ease of use, and the fact that the same parameter values (population size, crossover rate, and scaling factor) can be used to tackle different optimization problems. DE was originally introduced for continuous optimization problems, so modifications are needed to address both resource allocation and mining decisions.

    Integer constraints in the mining decisions are handled by rounding each continuous value to its nearest integer.

  • DEMiDRA [71]: Each individual encodes the resource allocation of one participating BU, and the resource allocations of all participating BUs together form the population. The DE algorithm is then used to optimize the resource allocation. To optimize the mining decision, the algorithm must choose which BUs join mining and update the number of participating BUs. Since this number equals the population size, the authors recast updating it as adapting the population size and derived an adaptive approach. Additionally, a tabu strategy is used to prevent unfavorable BUs from joining mining.

  • BOToP [43]: BOToP first obtains the optimal solution of the mixed-variable optimization problem by solving a constrained, modified bi-objective optimization problem. DE is then used to solve the original mixed-variable optimization problem and locate the best solution.
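The integer repair described for the DE baseline can be sketched in a few lines; the function name and the 0/1 coding of the join-mining decision are illustrative assumptions, not taken from [18].

```python
def repair_mining_decision(x):
    """Round DE's continuous mining-decision variables to the nearest
    integer (e.g., 0 = stay out of mining, 1 = join mining), as the
    DE baseline above requires for integer constraints."""
    return [int(round(v)) for v in x]

decisions = repair_mining_decision([0.2, 0.7, 1.4])  # continuous DE output
```

Note that Python's built-in `round` uses banker's rounding at exact .5 values; a production repair operator would pin down that edge case explicitly.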

4.2 Environment and parameter settings

In this paper, the proposed CHSGSO method was compared to four promising meta-heuristic methods: \(\hbox {ACO}_\textrm{MV}\), DE, DEMiDRA, and BOToP. For every method, 30 independent runs were executed, and the mean profit over the 30 runs was recorded. For a fair comparison, the maximum number of fitness evaluations was set to 10,000 for all methods. The common settings of all methods, along with the parameter settings of each method, are listed in Table 1.

Table 1 Parameters setup for all methods

The common settings of the considered BN are as follows. All BUs are placed randomly in a square area of \(1000\,\hbox {m} \times 1000\,\hbox {m}\), and the MEC server is located at the center of this area. Ten instances with different numbers of BUs \((i.e., n = 50, 100,\dots ,500)\) are studied to examine the performance of the proposed CHSGSO. Other settings are shown in Table 2.
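The geometric setup above is straightforward to reproduce; a Python sketch (the experiments themselves were run in MATLAB, and uniform placement is assumed since the paper only says "randomly"):

```python
import random

def build_bn_layout(n, side=1000.0, seed=None):
    """Place n BUs uniformly at random in a side x side square with the
    MEC server at the centre, mirroring the common BN setup described
    in the text. Returns BU coordinates and the MEC position."""
    rng = random.Random(seed)
    bus = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    mec = (side / 2.0, side / 2.0)
    return bus, mec

bus, mec = build_bn_layout(50, seed=0)  # smallest instance, n = 50
```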

Table 2 The considered BN common settings

All experiments in this paper were executed in MATLAB on a machine with dual Intel® Xeon® Gold 5115 2.4 GHz CPUs and 128 GB of RAM, running Microsoft Windows Server 2019.

4.3 The effect of group size (GS)

The effect of the group size GS on the performance of the proposed CHSGSO algorithm is verified through a series of experiments with \(GS \in \{5, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 25, 30\}\). Box plots of the average accumulative profits obtained by CHSGSO on the different instances are presented in Fig. 5. The Friedman test was carried out to rank all the variants, with the rankings of the mean results reported in Table 3. Table 3 shows that the variant with \(\hbox {GS}=14\) obtains the best result; based on the Friedman test, this variant is preferred and is used throughout the paper.
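The average Friedman rankings of Table 3 can be reproduced with a short routine: for each instance, rank the GS variants by mean profit (rank 1 = best), then average each variant's ranks across instances. This is a sketch with hypothetical profit values; it breaks ties arbitrarily, whereas a full Friedman procedure assigns average ranks to ties.

```python
def friedman_average_ranks(scores):
    """scores[instance][variant] = mean profit (higher is better).
    Returns the average Friedman rank of each variant (lower is better)."""
    n_var = len(scores[0])
    rank_sums = [0.0] * n_var
    for row in scores:
        # Rank variants best-to-worst on this instance (ties broken arbitrarily)
        order = sorted(range(n_var), key=lambda j: row[j], reverse=True)
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return [s / len(scores) for s in rank_sums]

# Hypothetical mean profits for 3 instances x 3 GS variants
ranks = friedman_average_ranks([[5, 9, 7], [4, 8, 6], [3, 9, 5]])
# Variant 2 wins every instance, so it gets the best (lowest) average rank
```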

Fig. 5

Group size effect for a 50; b 100; c 150; d 200; e 250; f 300; g 350; h 400; i 450; and j 500 BUs

Table 3 Average rankings of the variants according to Friedman test

4.4 Comparison with four counterparts

The results obtained by CHSGSO and the competing algorithms over 30 independent runs are presented in Table 4, where “AVG” and “STD” denote the mean and standard deviation, respectively, of the total earnings of all BUs. The ratios within square brackets denote the improvement rate of CHSGSO over the rival algorithms. Wilcoxon’s rank-sum test [20], conducted at a 0.05 significance level, is used to assess the significance of the proposed CHSGSO against its counterparts. In Table 4, \(``\approx ''\), \(``\downarrow ''\), and \(``\uparrow ''\) indicate that CHSGSO performs similarly to, worse than, and better than each counterpart, respectively.
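The significance test behind the \(\approx\)/\(\uparrow\)/\(\downarrow\) symbols can be sketched without external libraries via the normal approximation to the rank-sum statistic (reasonable for 30-run samples). The sketch assigns average ranks to ties but omits the tie correction to the variance; the function names are illustrative.

```python
import math

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    data = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    vals = [v for v, _ in data]
    rank_of = [0.0] * len(vals)
    i = 0
    while i < len(vals):
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):              # average rank over a tie group
            rank_of[k] = (i + 1 + j) / 2.0
        i = j
    w = sum(r for r, (_, g) in zip(rank_of, data) if g == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # 2 * (1 - Phi(|z|))

def verdict(chsgso_runs, rival_runs, alpha=0.05):
    """Return the Table 4 symbol from CHSGSO's point of view."""
    if ranksum_pvalue(chsgso_runs, rival_runs) >= alpha:
        return "≈"
    better = (sum(chsgso_runs) / len(chsgso_runs)
              > sum(rival_runs) / len(rival_runs))
    return "↑" if better else "↓"
```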

Table 4 Comparisons of CHSGSO against a few promising algorithms
Fig. 6

Evolution of the mean total gain obtained by the proposed CHSGSO and other algorithms for a 50; b 100; c 150; d 200; e 250; f 300; g 350; h 400; i 450; and j 500 BUs

From Table 4, it can be observed that the proposed CHSGSO achieves the best mean cumulative profit among all five algorithms in every case. Notably, at \(n\ge 200\), the mean cumulative gain achieved by CHSGSO is much higher than that of three of its counterparts (ACOMV, DE, and BOToP) and slightly higher than that of the DEMiDRA algorithm. In terms of improvement rate, CHSGSO outperforms its four counterparts in every case. Specifically, against DE, CHSGSO achieves more than a 100% performance enhancement in every case except \(n = 50\), and at \(n\ge 200\) its enhancement rate exceeds 200%. Against BOToP, CHSGSO likewise achieves more than a 100% performance enhancement in every case except \(n = 50\), and at \(n\ge 250\) its enhancement rate exceeds 200%. Against ACOMV, when \(n\ge 200\), CHSGSO obtains an improvement rate greater than 115%. Finally, CHSGSO is always slightly better than the DEMiDRA algorithm. According to Wilcoxon’s rank-sum statistical analysis, CHSGSO is statistically better than its four counterparts in every case.
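The bracketed improvement rates read naturally as relative gains; a one-line sketch under that assumption (the exact definition is not stated in this section, so the formula is labeled as an assumption):

```python
def improvement_rate(chsgso_mean, rival_mean):
    """Assumed definition of Table 4's bracketed improvement rate:
    the relative gain of CHSGSO's mean profit over the rival's,
    expressed as a percentage."""
    return 100.0 * (chsgso_mean - rival_mean) / rival_mean

# Under this definition, a rate above 100% means CHSGSO more than
# doubles the rival's mean profit
rate = improvement_rate(3.0, 1.0)
```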

Figure 6 presents the growth of the average accumulative gains obtained by ACOMV, DE, DEMiDRA, BOToP, and the proposed CHSGSO for n = 50, 100, 150, 200, 250, 300, 350, 400, 450, and 500. As shown in Fig. 6, CHSGSO achieves better average accumulative gains than all of its counterparts. Specifically, CHSGSO achieves higher average accumulative gains than ACOMV, DE, and BOToP in all instances. Against DEMiDRA, CHSGSO achieves slightly higher average accumulative gains in all cases except \(n = 150\), where the average cumulative gains of the two algorithms are equal.

4.5 Insights and real-world applications for proposed approach

One of the critical components of blockchain technology is the verification of transactions or blocks in IoT and blockchain networks. This process demands a significant amount of energy, making it a resource-intensive operation. Consequently, there is a continuous need to optimize gas usage in these networks. One gas optimization algorithm that has proven effective is the improved Henry gas solubility optimization method, which is inspired by Henry’s law: at constant temperature, the solubility of a gas in a liquid is directly proportional to the partial pressure of the gas. The proposed approach predicts the gas consumption of the network nodes, allowing more accurate estimation of the gas fees required for transactions in the blockchain network.
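The proportionality that HGSO mimics can be stated compactly. Henry’s law, at constant temperature, reads

```latex
S_{g} = H(T)\, P_{g},
```

where \(S_{g}\) is the solubility of the gas in the liquid, \(P_{g}\) is the partial pressure of the gas, and \(H(T)\) is Henry’s constant at temperature \(T\). In the optimizer, this solubility update governs how strongly each gas particle (candidate solution) is drawn toward the best particle in its cluster.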

The approach has various real-world applications, particularly in the financial sector, where blockchain networks are widely utilized. For instance, in a peer-to-peer (P2P) lending platform, users lend and borrow money from each other. The blockchain network that supports the platform should operate efficiently with minimal gas usage to ensure that the platform’s users benefit from it. Moreover, applying the proposed approach to smart contract deployment allows developers to build and deploy more efficient smart contracts that consume fewer resources while delivering optimal performance. For example, smart contracts deployed in the healthcare sector can record patient information and generate automated reminders, among other tasks, without consuming more gas than necessary.

Furthermore, the proposed algorithm is crucial for Proof-of-Stake (PoS) blockchain networks, where stakeholders who hold a significant number of tokens are responsible for verifying transactions instead of miners. Since PoS models do not require high computational power, the Improved Henry Gas Optimization method can significantly reduce gas usage for PoS-based blockchain networks. In conclusion, the proposed approach has proven to be an effective technique in reducing the gas fees required for transactions in blockchain networks. The use of this technique could significantly reduce the energy consumption of blockchain transactions, making blockchain networks more sustainable and environmentally friendly. The implementation of this method also has numerous real-world applications, particularly in finance, healthcare, energy, cybersecurity and IoT systems, where blockchain networks are widely employed.

5 Conclusions and future directions

In this study, an improved HGSO approach (termed CHSGSO) is presented to jointly optimize the resource allocation and mining decisions for MEC-enabled wireless BNs. First, resource allocation is encoded only for the participating BUs: each BU that decides to participate is encoded as part of an individual, and an adaptive strategy is designed to tune each individual’s size. A chaotic map was then integrated into the original HGSO to improve the convergence rate. Finally, CHSGSO was run on a group of instances of various scales and compared to \(\hbox {ACO}_\textrm{MV}\), DE, DEMiDRA, and BOToP. The empirical results verified the efficiency and significance of the proposed CHSGSO. It is noteworthy that this study assumes the IoT devices in the considered BNs are homogeneous.

In future studies, heterogeneous BNs will be examined. We will also investigate the proposed algorithm’s performance on other real-world optimization problems and on optimization problems with more than one objective function.