Introduction

The everyday needs of society are increasingly supported by cloud computing [1]. Cloud computing systems use a master-slave infrastructure among service providers. Client users access cloud services on demand under a “pay-as-you-go” model [2]. Cloud services can be accessed anywhere, at any time, through the internet. Despite these benefits, cloud computing incurs high latency and consumes considerable network bandwidth, since data centers are located far from the client user. A novel architecture that integrates fog computing into the cloud environment eradicates this issue. The idea of fog computing is to extend direct communication among the edge nodes of the network [3]. A fog computing network consists of fog servers equipped with a wireless communication device, data storage memory and a computational unit [4]. Energy-efficient resource utilization plays a significant role in this network. Therefore, any deviation in cloud data might result in a substantial change in energy consumption, leading to starvation. This situation can be eradicated by efficiently scheduling jobs onto the fog servers. However, dynamic job scheduling is tricky due to complicated task flows and perplexing issues [5].

The job scheduling problem (JSP) is an NP-hard combinatorial optimization problem, since the number of feasible solutions is \({\left(t!\right)}^{pm}\), where t denotes the number of jobs/tasks and pm specifies the total number of physical machines. Each job involves a unique operation sequence. The main aim of job scheduling is to reduce the response time and maximise the company’s revenue. However, the JSP poses two main challenges: choosing an adequate facility and determining the best order of operations to reduce the makespan. Several researchers have contributed to solving job scheduling using mathematical models, namely exact, heuristic and meta-heuristic approaches. Exact strategies such as dynamic programming, branch-and-bound and branch-and-price are adequate for small-scale problems and provide optimal solutions. However, these exact approaches are time-consuming and struggle to solve large-scale real-time problems. Small JSP instances can be solved optimally using the CPLEX solver [6].

In real-time settings, most JSPs are large-scale, and generic mathematical models cannot guarantee optimal solutions; they can provide near-optimal solutions only at the cost of enormous computation time. Recent research has used meta-heuristic algorithms, namely evolutionary algorithms and swarm intelligence algorithms, to solve the JSP. A hybrid genetic algorithm [7] was utilized to minimize the maximum completion time; however, the model fails to provide the optimal solution due to poor convergence. The multi-objective artificial bee colony algorithm [8] was introduced to address multiple objectives: the assignment vector, job permutation and machine speed selection. The Pareto archive particle swarm optimization algorithm [9] combines a crowding-distance-based archive and a mutation process to handle JSP issues; due to the method’s limited exploration ability, it becomes stuck in local optima. In [10], the ant colony method was used to overcome the problems of flexible job shops, and the outcomes demonstrate that the strategy produces superior results to the compared algorithms.

The hybrid genetic and cuckoo search technique devised by Singh Satyendra et al. [11] produces better results on small-scale instances but falls short on medium- and large-scale ones. Wang et al. [12] used hybrid discrete differential evolution to handle task scheduling challenges and reduce the maximum turnover time; however, the algorithm is significantly more complicated because of the problem with its solution update. Lu et al. [13] suggested a cellular grey wolf optimisation to reduce cycle time in blocking flow shops. To strengthen the search process by adding new control factors, the authors of [14] recently presented a hybrid whale optimization with Levy flight and the DE algorithm; they also used the technique to shorten the JSP’s turnaround time. The spotted hyena optimizer was devised to address discrete work sequence difficulties [5]. Atay et al. [15] suggested a clonal selection algorithm to address JSP difficulties; the approach yields superior results in terms of reducing completion time.

The main objective of the flower pollination algorithm [16] is to reduce the time complexity of task sequence issues. The authors in [17] introduced a bi-objective optimization technique to lower the makespan and cost factors. The authors also used the goal programming method for multi-objective task sequence assignment in cloud-fog computing [18]; the simulation results offer superior outcomes in terms of access-level control, deadline measures, and service delay times. A modified pigeon-inspired optimization technique [19] was applied to deal with job sequence issues, and the experimental results outperform the compared algorithms. However, due to the complexity of the algorithms, task assignment takes a long time.

The Social Spider Optimization (SSO) technique is a recently introduced swarm intelligence algorithm that mimics the social conduct of spiders. The technique models the working principles of spiders in terms of the biological behaviour of cooperative groups. The SSO approach segregates the total population into two kinds of search agents, namely male and female spiders, and applies specialized operators to each search agent. These operators help the algorithm balance the search process towards finding optimal solutions. Initially, SSO was introduced to handle global optimization problems [20]. In later work, some researchers modified the generic SSO algorithm to solve binary optimization problems, namely clustering and classification [21]. Further, the generic SSO algorithm has been utilized for several complex problems, namely constraint optimization problems [22], load balancing in a grid environment [23], clustering analysis [24], and multi-level thresholding problems [25].

The major highlights of this work are stated as follows:

  • Our proposed technique integrates the dynamic opposite learning (DOL) technique into the social spider optimization algorithm to enhance the exploration capability and to maximize the convergence rate.

  • This novel DOLSSO technique handles job scheduling issues in cloud environments. It enhances the diversity of scheduling solutions and reduces the chance of getting stuck in local optima.

  • Extensive experimentation was conducted using FogSim with two different test scenarios to validate the efficacy of the proposed system in handling the job scheduling problem.

  • The solution quality of the proposed algorithm is compared with five recent state-of-the-art meta-heuristic algorithms.

  • The results show that the proposed method achieves 10-15% better CPU utilization and 5-10% less energy consumption than the other techniques.

The rest of the article is structured as follows: the Literature work section deliberates existing works in the literature on fog computing infrastructure for handling scheduling problems; The JSP in FOG computing section discusses the system prototype; the Proposed methodology section presents the problem definition and the representation of the SSO algorithm for solving job scheduling problems; the Experimental analysis section illustrates the simulation analysis. Finally, the Conclusion section concludes the work with its outcomes and future directions.

Literature work

The scheduling issue concerns the sharing of available resources with respect to the number of tasks/jobs. The JSP is a significant and complex scheduling problem; job scheduling on a heterogeneous system is considered an NP-hard problem [26]. A graph partitioning strategy was adapted to balance the load in fog computing infrastructure [27]; the authors used dynamic graph partitioning for load balancing and cloud automation technology for building virtual machines. Data stream processing [28] was implemented in fog computing with a QoS-aware scheduler that monitors incoming and outgoing data for node utilization and inter-node communication. Load balancing in fog computing creates an issue with the quality of experience [29]; the authors proposed a reduced task scheduling algorithm to handle load balancing in fog computing, where, based on the scheduling process, resources are allocated to serving fog nodes called small cells. Van den Boss et al. [30] focus on cost-efficient scheduling based on deadline-constrained heuristics on public and private clouds. The budget-constrained schedule is implemented in the cloud environment and is based on the advantage of a comparative methodology; however, its higher complexity limits its suitability for extensive-scale networks [31].

In a standard JSP, there is a job block that determines the tasks and a block of physical machines with their resources [32]. In JSP, the general strategy is to process each job on the sequence of physical machines under the allocated resource constraints. However, the allotted resource constraints are not sufficient in real-world situations. Economic factors and globalization force IT industries to sustain themselves by converting single-facility operations into multi-facility production. Hence, several researchers have started to investigate JSP, which is a much more perplexing problem, and to schedule jobs by minimizing production cost and computational time [33].

Recent research shows that several researchers have worked on single-objective constraints, such as reducing the makespan, as well as on multi-objective constraints (such as makespan, machine workload, and the overall power utilization of companies). Hongtao Tang et al. [34] introduced a hybrid teaching- and learning-based approach to solve the scattered casting job scheduling issue. The authors presented a three-layer coding method and five neighbourhood formations to handle the local search process in the algorithm. A case study of a real-time casting enterprise evaluates the algorithm’s performance, and the observed results are compared with six state-of-the-art metaheuristic algorithms to prove its efficacy. Hui Li et al. [35] proposed a hybrid differential evolution algorithm to tackle flexible job planning by reevaluating tasks and job priority limitations. A chromosome encoding and decoding strategy is introduced to handle the real-time constraints and to attain a high-quality initial population. Further, genetic operators with self-adjusting conditions are incorporated to widen the search regions and increase the convergence rate.

The SSO algorithm is a recently proposed metaheuristic that works based on the behaviour of the social spider. Erik et al. [22] proposed the SSO algorithm to solve global continuous optimization problems. A robust clustering approach using SSO with a chaotic technique was presented in 2018 [36]. The literature review above shows that few works solve JSPs in fog computing using meta-heuristic techniques. In addition, no researcher in the literature has utilized the SSO algorithm to handle the JSP. Hence, we introduce a novel discrete SSO to solve the JSP, and the outcome indicates that DOLSSO attains better results than existing methods.

The JSP in FOG computing

In this section, an overview of JSP is given, covering the set of jobs, sequences and challenges in JSP. Then, the system design of the fog computing infrastructure is illustrated. In addition, the mathematical formulation of large-scale JSP is presented.

The overview of JSP

JSP is considered a significant scheduling issue in which a sequence of jobs must be processed on systems or servers while considering a number of constraints and objectives. A \(\mathcal{n}\times \ell\) JSP can be expressed as a set of \(\mathcal{n}\) jobs \(\psi =\left\{{\psi}_1,{\psi}_2,\dots, {\psi}_{\mathcal{n}}\right\}\) executed on a set of fog servers Υ = {Υ1, Υ2, …, Υℓ}. Each job ψi is expressed as an ordered sequence of \({\mathcal{n}}_i\) operations \({P}_{ij}\left(j=1,2,\dots, {\mathcal{n}}_i\right)\) that must be processed in the completion order given by the scheduled list. The challenge of JSP is to identify an adequate server for each operation (known as server selection) and the operation execution order on each server (known as operation ordering). According to the constraints on server selection and on job execution across one or more servers, the JSP is classified into three major categories: JSP, partial-JSP and total-JSP. In JSP, each job is executed only on one elected server, whereas partial-JSP jobs can be performed on a subset of the machines. Finally, total-JSP is highly flexible and can be implemented on any number of servers, as shown in Fig. 1.

Fig. 1 (a) JSP, (b) P-JSP, (c) T-JSP
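
To make the three variants concrete, the following minimal Python sketch (with hypothetical job and server names) encodes each operation together with its set of eligible fog servers: a singleton set corresponds to JSP, a proper subset to partial-JSP, and the full server set to total-JSP.

```python
# Minimal sketch (hypothetical data): each job is an ordered list of
# operations, and each operation carries the set of fog servers that
# may execute it.
servers = ["Y1", "Y2", "Y3"]

jobs = {
    # JSP: every operation is bound to exactly one server.
    "psi_1": [{"op": "P11", "eligible": {"Y1"}},
              {"op": "P12", "eligible": {"Y3"}}],
    # Partial-JSP: operations may run on a subset of the servers.
    "psi_2": [{"op": "P21", "eligible": {"Y1", "Y2"}}],
    # Total-JSP: operations may run on any server.
    "psi_3": [{"op": "P31", "eligible": set(servers)}],
}

def variant(job):
    """Classify a job by the size of its eligible-server sets."""
    sizes = {len(op["eligible"]) for op in job}
    if sizes == {1}:
        return "JSP"
    if all(s == len(servers) for s in sizes):
        return "T-JSP"
    return "P-JSP"

for name, ops in jobs.items():
    print(name, variant(ops))
```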

System design

The fog computing infrastructure consists of \(\mathcal{n}\) fog server machines. In our architecture, the cloud administrator is responsible for partitioning a job ψ into T tasks. The decomposition of jobs into tasks is based on the number of available fog server machines. We propose an SSO algorithm executed by the cloud administrator to ensure optimal scheduling on the fog servers.

Job scheduling is represented by a Directed Weighted Graph (DWG); a DWG is a weighted-node directed graph denoted by DWG = (T, E, F, μ, δ). T represents the set of tasks to be performed, and E indicates the data dependencies among tasks. Let the cloud consist of a set of fog servers Υ = {Υ1, Υ2, …, Υℓ}. There are \(\mathcal{n}\) jobs submitted to be scheduled and executed by the fog servers. The jobs are partitioned into tasks, and each task is apportioned to one fog server for scheduling. Let a job ψ be partitioned into tasks represented as \(\psi =\left\{{\psi}_1,{\psi}_2,\dots, {\psi}_{\mathcal{n}}\right\}\), where \(\mathcal{n}=\mid \Upsilon \mid\). E represents the set of edges among the nodes in T.
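
As an illustration of this representation, the sketch below shows one way the DWG = (T, E, F, μ, δ) might be held in memory; the task identifiers, execution times and memory figures are hypothetical.

```python
# Sketch of the directed weighted graph DWG = (T, E, F, mu, delta).
# Node weights: mu (CPU execution time) and delta (allocated memory).
tasks = {                                  # T, with per-task mu (s) and delta (MB)
    "T1": {"mu": 4.0, "delta": 256},
    "T2": {"mu": 2.5, "delta": 128},
    "T3": {"mu": 6.0, "delta": 512},
}
edges = [("T1", "T2"), ("T1", "T3")]       # E: data dependencies among tasks
fog_servers = ["Y1", "Y2"]                 # available fog servers

# Successor lists make it easy to respect dependencies while scheduling.
succ = {t: [] for t in tasks}
for u, v in edges:
    succ[u].append(v)
print(succ)   # {'T1': ['T2', 'T3'], 'T2': [], 'T3': []}
```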

Definition 1

In DWG, assume any task can be processed on any fog server machine in the formulated architecture. The number of edges in the graph is bounded by |E| ⊆ |Υ| × |T|. Each task Ti is processed on a fog server machine with respect to its execution time and allocated memory, represented as \({T}_i=\left[\begin{array}{c}{\mu}_i\\ {}{\delta}_i\end{array}\right]\), where μi is the CPU execution time of task Ti on the fog server and δi is the memory allocated for task Ti.

Definition 2

In DWG, the input for the graph is given by the Initialization Matrix IG = (Υi, j)n ∗ n. The initialization matrix records whether task i is to be executed on fog server machine j. An element of the initialization matrix is 1 if the task is assigned to the fog server and 0 otherwise, i.e., Υi, j ∈ {0, 1}.

$${\textrm{Y}}_{i,j}=\left\{\begin{array}{ll}1, & if\ task\ i\ is\ allocated\ to\ fog\ server\ machine\ j\\ {}0, & otherwise\end{array}\right.$$
(1)

The objective of job scheduling is to minimize the total cost of the execution time for processing tasks; thus, the Optimal Scheduled Matrix (OG) is given as \({O}_G={\left({\textrm{Y}}_{i,j}\right)}_{n\ast n}\).
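
The assignment matrix of Eq. (1) and the single-server requirement can be sketched as a binary matrix; the dimensions and the task-to-server mapping below are hypothetical.

```python
import numpy as np

n_tasks, n_servers = 4, 4
# Initialization matrix IG: IG[i][j] = 1 if task i is allocated to fog
# server j, else 0 (Eq. (1)).
IG = np.zeros((n_tasks, n_servers), dtype=int)
assignment = {0: 2, 1: 0, 2: 3, 3: 1}   # hypothetical task -> server map
for task, server in assignment.items():
    IG[task, server] = 1

# Each task is placed on exactly one server, so every row sums to 1
# (cf. the single-server constraint in Eq. (11)).
assert IG.sum(axis=1).tolist() == [1] * n_tasks
print(IG)
```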

The mathematical formulation of large-scale JSP

This section discusses the mathematical formulation of the large-scale job scheduling problem. Large-scale JSP extends JSP in the number of jobs and machines; therefore, it shares the same mathematical model as JSP. The constraints and objective functions of large-scale JSP are mathematically modelled as follows.

  1)

    Once the jth task of the ith job is assigned to the ℓth server, it cannot be interrupted until it completes its execution. The completion time of Pij is then computed as the sum of its initial time and handling time.

$${C}_{ij\ell }={S}_{ij\ell }+{E}_{ij\ell }$$
(2)

where Cijℓ, Sijℓ and Eijℓ denote the culmination time, initial time and handling time of the jth task of the ith job, Pij, on server ℓ, respectively.

  2)

    The initial time of the jth task of the ith job, Pij, on server ℓ is calculated from the completion time of the preceding task μ on the same server and the completion time of the job's preceding task.

$${S}_{ij\ell }=\max \left\{{S}_{\left(i-1\right)\mu \ell }+{E}_{\left(i-1\right)\mu \ell },{C}_{i\left(j-1\right)\left(\ell -1\right)}\right\}$$
(3)
  3)

    A server shuts down when its last job completes its execution.

$${\Psi}_{\ell }={\Psi}_{\vartheta \ell }$$
(4)

where Ψℓ denotes the shutdown time of server ℓ, ϑ indicates the last job on server ℓ, and Ψϑℓ represents the completion time of that last job on server ℓ.

  4)

    There is a set of limitations on the server for executing the processes; these constraints are specified below.

$${\displaystyle \begin{array}{c}{C}_{\omega \ell }-{C}_{i\ell }+\textrm{Y}\left(1-{\upchi}_{i\omega \ell}\right)\ge {E}_{\omega \ell}\\ {}i,\omega =1,2,\dots, n;\ell =1,2,\dots, m\end{array}}$$
(5)

where n and m indicate the numbers of jobs and servers, respectively. Cωℓ and Eωℓ specify the culmination time and handling time of job ω on server ℓ, and Υ denotes a sufficiently large positive integer. χiωℓ denotes the handling sequence priority of jobs i and ω on server ℓ: if job i is executed on the server before job ω, then χiωℓ = 1, otherwise 0.

  5)

    A single server cannot handle more than one operation simultaneously.

$${S}_{ij\ell}\ge {C}_{\omega t\ell }-{\textrm{T}}_{ij\omega t\ell }.\textrm{Y},\forall \ell \in {\textrm{Y}}_{ij}\cap {\textrm{Y}}_{\omega t}$$
(6)
$${S}_{\omega t\ell}\ge {C}_{ij\ell }-\left(1-{\textrm{T}}_{ij\omega t\ell}\right).\textrm{Y},\forall \ell \in {\textrm{Y}}_{ij}\cap {\textrm{Y}}_{\omega t}$$
(7)

where Τijωtℓ denotes whether the jth task of the ith job, Pij, is handled earlier than the tth task of the ωth job, Pωt, on server ℓ; if so, it is 1, otherwise 0. Υij ∩ Υωt indicates the set of available servers that can handle both tasks Pij and Pωt.

  6)

    When a job has numerous operations, they must be processed in the determined order.

$${\displaystyle \begin{array}{c}{S}_{ij\ell}\ge {C}_{i\left(j-1\right)\ell },\ell \in {\textrm{Y}}_{ij}\\ {}{\forall}_i=1,2,\dots, n,{\forall}_j=2,\dots, {\varphi}_i\end{array}}$$
(8)

where φi denotes the number of tasks of job i.

  7)

    When the server is scheduled, all jobs are available at the initial time (S = 0).

$${S}_{ij\ell}\ge 0$$
(9)
$${C}_{ij\ell}\ge 0$$
(10)
  8)

    For each task, exactly one server is chosen for its execution.

$$\sum_{\ell =1}^m{\beta}_{ij\ell }=1$$
(11)

where βijℓ indicates whether server ℓ handles task Pij: 1 denotes that the task is executed on that server, and 0 otherwise.
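
Before the objective of Eq. (12) is evaluated, a candidate schedule has to satisfy the constraints above. A minimal feasibility check over a hypothetical schedule representation might look as follows; it covers non-negativity (Eqs. (9)-(10)), non-overlap on each server (cf. Eqs. (6)-(7)) and precedence within each job (Eq. (8)).

```python
# Sketch of a feasibility check for a candidate schedule.  Each entry is
# (job, task_index, server, start, completion); the data are hypothetical.
ops = [
    ("psi1", 1, "Y1", 0.0, 3.0),
    ("psi1", 2, "Y2", 3.0, 7.0),
    ("psi2", 1, "Y1", 3.0, 5.0),
]

def feasible(ops):
    # Non-negativity of start and completion times (Eqs. (9)-(10)).
    if any(start < 0 or comp < start for _, _, _, start, comp in ops):
        return False
    # No two operations may overlap on the same server (cf. Eqs. (6)-(7)).
    by_server = {}
    for _, _, server, start, comp in ops:
        by_server.setdefault(server, []).append((start, comp))
    for intervals in by_server.values():
        intervals.sort()
        for (_, c1), (s2, _) in zip(intervals, intervals[1:]):
            if s2 < c1:
                return False
    # Tasks of the same job must follow their determined order (Eq. (8)).
    by_job = {}
    for job, idx, _, start, comp in ops:
        by_job.setdefault(job, []).append((idx, start, comp))
    for tasks in by_job.values():
        tasks.sort()
        for (_, _, c1), (_, s2, _) in zip(tasks, tasks[1:]):
            if s2 < c1:
                return False
    return True

print(feasible(ops))   # True for this toy schedule
```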

The objective of a JSP optimization problem may incorporate the maximum culmination time, the maximum server load, the minimum execution cost, etc. This work utilizes the maximum culmination time to validate the proposed algorithm’s efficacy, which can be mathematically modelled as follows.

$$F=\min (Makespan)=\min \left\{\underset{1\le \ell \le m}{\max }{\Psi}_{\ell}\right\}$$
(12)

where Makespan denotes the maximum culmination time over all servers; it is the principal measure used to evaluate handling efficacy. Ψℓ is mathematically expressed as.

$${\Psi}_{\ell }=\sum_{i=1}^n\sum_{j=1}^{\varphi_i}\sum_{\ell =1}^m\left({S}_{ij\ell }{\beta}_{ij\ell }+{E}_{ij\ell }{\beta}_{ij\ell }\ \right)$$
(13)
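
To make Eqs. (2), (3), (12) and (13) concrete, the sketch below computes completion times for a tiny hypothetical schedule and takes the largest server finishing time as the makespan; it is a simplification that assumes the listed per-server orders already respect job precedence.

```python
# Sketch (hypothetical data): the completion time per Eq. (2) is the start
# time plus the handling time, and the makespan is the largest server
# finishing time (cf. Eq. (12)).  schedule[server] lists (job, task,
# handling_time) in execution order.
schedule = {
    "Y1": [("psi1", "P11", 3.0), ("psi2", "P21", 2.0)],
    "Y2": [("psi1", "P12", 4.0)],
}

def makespan(schedule):
    job_ready = {}                            # finish time of each job's last task
    server_free = {s: 0.0 for s in schedule}
    for server, ops in schedule.items():
        for job, task, handling in ops:
            # Eq. (3): start no earlier than the server is free and the
            # job's preceding task has completed.
            start = max(server_free[server], job_ready.get(job, 0.0))
            completion = start + handling     # Eq. (2)
            server_free[server] = completion
            job_ready[job] = completion
    return max(server_free.values())          # makespan

print(makespan(schedule))   # 7.0 on this toy instance
```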

Proposed methodology

This section discusses the overview of generic social spider optimization algorithm, the dynamic opposite learning concept and the formulation of the proposed algorithm.

Overview of social spider optimization algorithm

The SSO algorithm mimics foraging behaviour guided by the sense of vibration. The food source is another social spider or prey on the web, and its direction is learned through the vibrations sensed on the web. In the SSO algorithm, a spider denotes a feasible solution and the web indicates the search space. SSO consists of male and female agents; the number of female individuals is greater than the number of male individuals. The numbers of female individuals (Nf) and male individuals (Nm) are calculated using

$${N}_f= Floor\left[\left(0.9-\mathit{\operatorname{rand}}\cdot 0.25\right)\cdot Np\right]$$
(14)
$${N}_m= Np-{N}_f$$
(15)

where rand denotes a random number within the range [0,1] and Np denotes the population size. Each spider i in the population Φ receives a weight Wi based on its fitness. The weight is calculated using

$${W}_i=\frac{fit\left(\Phi \right)-{worst}_{\Phi}}{best_{\Phi}-{worst}_{\Phi}}$$
(16)

The vibration Va, b is received on the web by spider a and sent by spider b, where da, b is the Euclidean distance between them; it can be calculated using

$${V}_{a,b}={W}_i.{e}^{-{d}_{a,b}^2}$$
(17)

The cooperative behaviour of female agents is calculated using

$${f}_i\left(t+1\right)=\left\{\begin{array}{ll}{f}_i(t)+x\cdot {V}_{i,c}\cdot \left({\Phi}_c-{f}_i(t)\right)+y\cdot {V}_{i,d}\cdot \left({\Phi}_d-{f}_i(t)\right)+z\cdot \left(\mathit{\operatorname{rand}}-\frac{1}{2}\right), & {r}_l<\gamma \\ {}{f}_i(t)-x\cdot {V}_{i,c}\cdot \left({\Phi}_c-{f}_i(t)\right)-y\cdot {V}_{i,d}\cdot \left({\Phi}_d-{f}_i(t)\right)+z\cdot \left(\mathit{\operatorname{rand}}-\frac{1}{2}\right), & {r}_l\ge \gamma \end{array}\right.$$
(18)

where x, y, z, rl and rand are arbitrary values between [0,1], γ is a threshold value, t is the iteration number, and c and d are the nearest spiders. The cooperative behaviour of male agents is calculated using

$${m}_i\left(o+1\right)=\left\{\begin{array}{ll}{m}_i(o)+x\cdot {V}_{i,f}\cdot \left({\Phi}_f-{m}_i(o)\right)+z\cdot \left(\mathit{\operatorname{rand}}-\frac{1}{2}\right), & if\ {W}_{N_{f+i}}>{W}_{N_{f+m}}\\ {}{m}_i(o)+x\cdot \left(\frac{\sum_{h=1}^{N_m}{m}_h(o)\cdot {W}_{N_{f+h}}}{\sum_{h=1}^{N_m}{W}_{N_{f+h}}}-{m}_i(o)\right), & if\ {W}_{N_{f+i}}\le {W}_{N_{f+m}}\end{array}\right.$$
(19)

where \(\frac{\sum_{h=1}^{N_m}{m}_h(o)\cdot {W}_{N_{f+h}}}{\sum_{h=1}^{N_m}{W}_{N_{f+h}}}\) denotes the weighted mean of the male population.

Algorithm 1 (figure a)

The dominant male and female agents are used for mating within a radius r, which can be calculated using

$$r=\frac{\sum_{j=1}^n\left({ub}_j^{high}-{lb}_j^{low}\right)}{2.n}$$
(20)

where \({ub}_j^{high}\) and \({lb}_j^{low}\) are the upper and lower bounds of the problem. The SSO algorithm is presented in Algorithm 1.
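
A minimal NumPy sketch of the population split (Eqs. (14)-(15)), the weight assignment (Eq. (16)) and the vibration computation (Eq. (17)); the fitness function, bounds and population size are placeholders, and a minimization objective is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
Np, dim = 10, 5
lb, ub = 0.0, 1.0                        # placeholder search-space bounds

def fitness(x):                          # placeholder objective (minimized)
    return np.sum(x ** 2)

# Eqs. (14)-(15): split the colony into female and male spiders.
Nf = int(np.floor((0.9 - rng.random() * 0.25) * Np))
Nm = Np - Nf

pop = rng.uniform(lb, ub, (Np, dim))
fit = np.array([fitness(ind) for ind in pop])

# Eq. (16): weight of each spider relative to the best/worst fitness
# (for minimization the best fitness is the smallest value, so the best
# spider gets weight 1 and the worst gets weight 0).
best, worst = fit.min(), fit.max()
W = (worst - fit) / (worst - best)

# Eq. (17): vibration perceived by spider a from spider b.
def vibration(a, b):
    d = np.linalg.norm(pop[a] - pop[b])
    return W[b] * np.exp(-d ** 2)

print(Nf, Nm, vibration(0, 1))
```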

Dynamic opposite learning method

The dynamic opposite learning (DOL) method is an extension of the opposition-based learning (OBL) method proposed by Xu et al. [37]. The OBL method can improve the quality of the algorithm towards the optimum through exploration of the search space; however, the opposite position determined from the current position is static. DOL is introduced to improve solution quality and eliminate being stuck in local optima. DOL is similar to OBL, but replaces the opposite number Xo with Xro (i.e., Xro = rand · Xo, rand ∈ [0, 1]). The static search region is thereby changed into a dynamic search region, characterized by an active change driven by the random number.

Assumption

Let α denote the lower bound of the individual and λ denote the upper bound. For Xro, α and λ, there are two cases, namely Xro ∈ [α, λ] and Xro ∉ [α, λ]. The individuals Xro, Xo and X have three possible relations: 1) Xro lies between Xo and X, 2) Xro is larger than Xo, and 3) Xro is smaller than X.

The dynamic opposite number Xdo is regenerated when Xdo ∉ [α, λ], i.e., when Xro exceeds the upper or lower bound, according to Eq. (21).

$${X}_{do}=\left\{\begin{array}{cc}X+{r}_1.\left({X}_{ro}-X\right)& if\ {X}_{ro}>\lambda \\ {}X+{r}_2.\left({X}_{ro}-X\right)& if\ {X}_{ro}<\alpha \end{array}\right\}$$
(21)

The mathematical model of DOL concerning the dynamic opposite number and the active opposite point is expressed as follows.

Dynamic opposite value

This defines the step-by-step process of DOL. The mathematical model of Xdo is expressed in Eq. (22).

$${X}_{do}=X+z.{r}_1.\left({r}_2.{X}_o-X\right)$$
(22)
$${X}_o=\lambda +\alpha -X$$
(23)

where r1 and r2 are arbitrary values within the bound [0,1], z indicates the learning weight, and X represents the actual number within the range [α, λ].

Dynamic opposite point: the extension of the dynamic opposite value to a multi-dimensional search point. The mathematical model of the dynamic opposite point is expressed in Eq. (24).

$${X}_{do,j}={X}_j+z.{r}_3.\left({r}_4.{X}_{o,j}-{X}_j\right),j=1,2,\dots, D$$
(24)
$${X}_{o,j}={\lambda}_j+{\alpha}_j-{X}_j$$
(25)

where D determines the D-dimensional space that incorporates the possible individuals; X specifies the present individual in the D-dimensional space (i.e., X = {X1, X2, …, XD}), limited by the upper bound λ = {λ1, λ2, …, λD} and lower bound α = {α1, α2, …, αD}; and r3 and r4 are arbitrary values within the range [0,1].

The DOL method is incorporated into the initialization and the iterations of the SSO population. In the initialization process, the opposite population Φdo is generated from the population Φ and merged with it (i.e., Φdo ∪ Φ). Then, the fitness of the population Φdo ∪ Φ is computed, and the better half of the individuals is selected as the population ΦS. In the generation process, if the hopping condition is satisfied, Φdo is produced from ΦS and the whole population Φdo ∪ ΦS is processed; then the fitness of Φdo ∪ ΦS is computed and the process repeats until termination. If a solution of Φdo, ΦS or Φ exceeds the boundary, the individual is regenerated within the upper and lower limits.
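
The DOL-driven initialization described above can be sketched as follows: an opposite population is generated per Eqs. (22)/(26), out-of-bound components are regenerated per Eq. (27), and the fitter half of the merged set is kept. The bounds, sizes, learning weight and fitness function are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
Np, dim = 20, 6
alpha, lam = np.zeros(dim), np.ones(dim)     # lower / upper bounds (placeholders)
z = 1.0                                      # learning weight

def fitness(x):                              # placeholder objective (minimized)
    return np.sum(x ** 2)

phi = rng.uniform(alpha, lam, (Np, dim))     # SSO population Phi

# Dynamic opposite population (Eqs. (22)/(26)), using the simple opposite
# of Eq. (23): X_o = lambda + alpha - X.
r1, r2 = rng.random((Np, dim)), rng.random((Np, dim))
phi_do = phi + z * r1 * (r2 * (lam + alpha - phi) - phi)

# Eq. (27): regenerate out-of-bound components randomly inside [alpha, lam].
out = (phi_do < alpha) | (phi_do > lam)
low = np.broadcast_to(alpha, phi_do.shape)[out]
high = np.broadcast_to(lam, phi_do.shape)[out]
phi_do[out] = rng.uniform(low, high)

# Merge Phi_do with Phi and keep the fitter half as the working population.
merged = np.vstack([phi, phi_do])
order = np.argsort([fitness(ind) for ind in merged])
phi_s = merged[order[:Np]]
print(phi_s.shape)    # (20, 6)
```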

DOL-based SSO algorithm

In this part, we propose a novel variant of SSO, namely DOLSSO, which enhances the standard SSO algorithm to handle the premature convergence that constantly riddles several optimization techniques.

Population initialization based on DOL

The DOL-based opposite solutions are generated from the first half of the present individuals, and the DOL population is processed together with the SSO population initialization method. The mathematical model is expressed as follows.

$${\Phi}_{ij}^{do}={\Phi}_{ij}+{r}_{1,i}.\left({r}_{2,i}.\left({\lambda}_j+{\alpha}_j-{\Phi}_{ij}\right)-{\Phi}_{ij}\right)$$
(26)

where Φij indicates the jth dimension of the ith solution in the population created by SSO; r1, i and r2, i are two arbitrary values within the range [0,1]; λj and αj specify the upper and lower boundaries of the jth dimension; and \({\Phi}_{ij}^{do}\) determines the opposite individual of Φij. The DOL population initialization \({\Phi}_{ij}^{do}\) should satisfy the boundary region as per Eq. (27).

$${\Phi}_{ij}^{do}= RN, if\ {\Phi}_{ij}^{do}<{\alpha}_j\Big\Vert {\Phi}_{ij}^{do}>{\lambda}_j$$
(27)

where RN indicates the arbitrary values within the limit of [αj, λj].

Generation hopping based on DOL

DOL generation hopping is analogous to the DOL population initialization process. This hopping strategy is applied throughout the generations to help the algorithm escape from local optima. The hopping rate of the DOL method is represented by δ ∈ [0, 1], which determines the probability of applying the technique. If the arbitrary value produced is smaller than the hopping rate factor δ, DOL performs the hopping action. The mathematical model of the hopping process is expressed as follows.

$${\Phi}_{ij}^{do}={\Phi}_{ij}+z.\kern0.5em {r}_{3,i}.\left({r}_{4,i}.\left({\lambda}_j+{\alpha}_j-{\Phi}_{ij}\right)-{\Phi}_{ij}\right)$$
(28)

where z denotes the learning weight, which differs for various scenarios and conditions, and r3, i and r4, i are two arbitrary values within the range [0,1].
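
A small sketch of the δ-gated hopping step of Eq. (28): at each generation, a random draw below the hopping rate triggers the DOL jump, followed by the same boundary regeneration as in Eq. (27). The bounds, learning weight and population are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
delta = 0.5                                   # hopping rate factor
z = 1.0                                       # learning weight

def dol_hop(pop, alpha, lam):
    """Apply the DOL generation hopping of Eq. (28) to a whole population."""
    r3, r4 = rng.random(pop.shape), rng.random(pop.shape)
    hopped = pop + z * r3 * (r4 * (lam + alpha - pop) - pop)
    # Regenerate out-of-bound components inside [alpha, lam] (cf. Eq. (27)).
    out = (hopped < alpha) | (hopped > lam)
    hopped[out] = rng.uniform(alpha, lam, size=out.sum())
    return hopped

pop = rng.uniform(0.0, 1.0, (5, 3))
if rng.random() < delta:                      # hop only with probability delta
    pop = dol_hop(pop, alpha=0.0, lam=1.0)
print(pop)
```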

DOLSSO algorithm

As discussed earlier, DOLSSO is a modified version of SSO that incorporates the DOL method into the standard SSO. The SSO algorithm is given in Algorithm 1, and the DOL method is discussed in the Dynamic opposite learning method section. The procedure of DOLSSO is presented in Algorithm 2, and the workflow of DOLSSO is sketched in Fig. 2.

Fig. 2 Flowchart of the DOLSSO algorithm

Algorithm 2 (figure b)

Decoding and encoding for JSP

A wide variety of strategies is available to encode and decode the JSP. Gao et al. [38] introduced one of the most popular strategies, comprising server selection and operation ordering vectors. Although this strategy provides better results in some scenarios, its representation of individuals increases memory utilisation. In this work, we utilize the encoding and decoding strategy from reference [37].

Exploration and exploitation analysis

We utilize the hopping rate factor (δ) in the proposed system to trade off between exploration and exploitation. If an individual satisfies the hopping condition, the current individual undergoes the exploration process using the DOL method; otherwise, the individual uses the SSO method to exploit the search space. We use δ with a fixed value of 0.5 to determine the solution update process. The DOLSSO algorithm starts with a set of random solutions. At each generation, search individuals update their positions with respect to randomly selected search agents or the best individual found so far. Depending on the hopping factor δ, the proposed algorithm switches between the exploration and exploitation processes. Finally, DOLSSO terminates when a stopping criterion is fulfilled.
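
Putting the pieces together, the δ-controlled switch between DOL exploration and SSO exploitation could be organised as in the following self-contained sketch; the two operators are deliberately simplified stand-ins for Algorithm 1 and Eq. (28), not the full DOLSSO procedure.

```python
import numpy as np

# Self-contained sketch of the delta-gated switch between DOL exploration
# and SSO-style exploitation; both operators are simplified stand-ins
# (clipping instead of random regeneration, and a move-towards-the-best
# rule instead of the full female/male/mating operators).
rng = np.random.default_rng(3)
delta, z = 0.5, 1.0
alpha, lam = 0.0, 1.0
pop = rng.uniform(alpha, lam, (6, 4))
fitness = lambda x: np.sum(x ** 2)            # placeholder objective

def explore_dol(pop):
    r3, r4 = rng.random(pop.shape), rng.random(pop.shape)
    new = pop + z * r3 * (r4 * (lam + alpha - pop) - pop)   # cf. Eq. (28)
    return np.clip(new, alpha, lam)

def exploit_sso(pop):
    best = pop[np.argmin([fitness(p) for p in pop])]
    return np.clip(pop + rng.random(pop.shape) * (best - pop), alpha, lam)

for _ in range(100):
    pop = explore_dol(pop) if rng.random() < delta else exploit_sso(pop)

print(min(fitness(p) for p in pop))
```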

Experimental analysis

The experimentation setup and evaluation of results are presented to ensure the effectiveness of the proposed system, and the obtained outcomes of the introduced model are compared with five state-of-the-art metaheuristic algorithms. We have utilized the FogSim simulator to generate the dataset for this scheduling study. FogSim is an energy-efficient open-source tool for modelling and simulating resource management in fog/edge computing, and it is integrated with CloudSim to handle the interactions within fog computing environments. In CloudSim, different entities, such as data centres, use communication processes for transmission.

Parameter settings

The performance of the SSO algorithm for solving job scheduling is coded using FogSim with CloudSim under Windows 10 on an Intel i5 processor at 3.4 GHz with 16 GB RAM. The empirical results are compared with other state-of-the-art metaheuristic algorithms, namely Social Spider Optimization (SSO) [22], the Grasshopper Optimization Algorithm (GOA) [39], the Salp Swarm Algorithm (SSA) [40], Grey Wolf Optimization (GWO) [41], and the Whale Optimization Algorithm (WOA) [42]. For all algorithms, the population size, maximum iterations and number of runs are fixed at 60, 100 and 20, respectively. The simulation parameters utilized for this experimentation are illustrated in Table 1.

Table 1 Simulation parameters for experimentation

Result analysis

The dataset is generated with the aid of the FogSim-based simulation tool. The proposed and compared algorithms are run for 100 epochs for each test case with varying tasks, and the obtained results are graphically plotted. Further, we experimented with two test cases concerning the number of processors and the number of fog nodes. In the first case, the number of processors varies over 4, 8, 12, 16, 20 and 24 with 20, 40, 80, 160, 320 and 400 tasks, respectively. Each task is allocated to an adequate number of fog servers, and the processing orders are determined by the solution representation specified in the DOL-based SSO algorithm section. In the second case, the number of fog nodes varies over 5, 10, 15 and 20 with different numbers of jobs.

Case 1: experimentation based on the number of processors

In the first case, we created varying numbers of processors with various tasks. The experimental results are measured, and the introduced model is contrasted with five metaheuristic algorithms: GOA, SSA, GWO, WOA and SSO. Figure 3 shows the average resource utilization with respect to the number of processors; the proposed DOLSSO algorithm provides better results than GOA, SSA, GWO, WOA and SSO. The standard SSA algorithm competes with the proposed algorithm but falls behind over the iterations. The proposed DOLSSO algorithm utilizes the processors effectively by allocating adequate tasks to the available machines, while GWO and WOA provide broadly similar resource utilization. Overall, the average resource utilization of DOLSSO improved on average by 12% over the standard SSA algorithm.

Fig. 3 Resource consumption with respect to the number of CPUs

The average energy depletion ratio with respect to the number of processors is represented in Fig. 4. In this experimentation, a lower energy consumption ratio indicates that the algorithm conserves energy and provides better performance. Based on Fig. 4, we notice that the introduced model gives significant outcomes for all numbers of processors, except eight processors, compared to the other algorithms. Moreover, GWO and SSA provide similar results and attain a higher energy consumption ratio, hence lower performance than the proposed method. With eight processors, the standard SSO algorithm and the proposed algorithm achieve the same results due to the random initialization of the populations, and several algorithms consume more energy than the proposed algorithm. Though the proposed algorithm attains the same output with eight processors, it utilizes the resources more effectively than the standard SSO algorithm.

Fig. 4 Average energy consumption ratio with respect to the number of CPUs

Case 2: experimentation based on the number of fog nodes

The efficacy of the proposed model in fog computing scenarios is validated by varying the number of fog nodes from 5 to 20 with a varying number of jobs. The CPU clock rate and allocated memory of the heterogeneous nodes are taken from [43]. We considered four cases of fog nodes; the maximum number of iterations is 8000 in all test cases, with 20 runs. The introduced model is contrasted with five existing metaheuristic algorithms: GOA, SSA, GWO, WOA and SSO. For evaluation, two performance metrics are utilized: execution time and allocated memory for jobs with respect to completion time. Once a server is assigned a specific number of jobs, it is locked until it completes their execution. The execution times for various jobs with respect to the number of fog nodes are reported in Table 2 and illustrated in Fig. 5; they clearly show that the introduced model yields lower execution times than the other compared algorithms.

Table 2 Execution time for jobs

Fig. 5 Analysis and comparison of execution time to process jobs

The allocated memory for jobs with respect to the number of fog nodes is reported in Table 3, and its pictorial representation is illustrated in Fig. 6. As observed from the results, Table 3 and Fig. 6 show that the DOLSSO method outperforms the compared algorithms. The maximum allocated memory for 20 fog nodes across the determined jobs attained by DOLSSO (2.6 GB) is lower than that of the standard SSO (2.9 GB). In addition, the allocated memory achieved by GWO (2.8 GB) and SSO (2.9 GB) are close to each other. Based on the experimental results, incorporating the DOL strategy into SSO improves performance by avoiding local optima and providing a better convergence rate.

Fig. 6 Analysis and comparison of allocated memory for jobs

Table 3 Allocated memory for jobs

Conclusion

In the last few years, fog computing has attracted great attention from researchers, industrialists and the community due to its computational services. We addressed the job scheduling issue in the fog computing setting with reduced CPU time utilization and memory usage. This work introduces a novel version of the SSO method, namely the DOLSSO algorithm, by incorporating the dynamic opposite learning (DOL) approach. The proposed model enriches the solution quality by avoiding local optima and boosting the convergence rate towards the optimal solution. The experimentation is carried out in the FogSim simulation tool with two different scenarios: the first deals with the number of processors with respect to the number of tasks, and the second with the number of fog nodes with respect to the number of jobs. The proposed infrastructure guarantees the execution of data requests and satisfies mobile users effectively using the DOLSSO algorithm. The empirical results show the algorithm’s effectiveness in obtaining an optimal schedule in a fog computing environment: the proposed method achieves ~10-15% better CPU utilization and ~5-10% less energy consumption than the other algorithms. Further, this work can be extended to handle the multi-objective flexible job scheduling issue by incorporating self-adaptive parameters into the DOLSSO algorithm.