Introduction

The rapid growth of digital technology has brought fast transitions in almost every aspect of life. Real-time data access, automation, and device-to-device connectivity are rapidly changing current industrial practices. In network science, researchers strive to bring intelligence closer to the data source [1]; the concept of fog computing has therefore been proposed. A conventional fog computing framework generally comprises three layers: the lowest is the IoT device layer, where data is generated; the middle layer comprises fog brokers that deliver computation, storage, and networking services; and the topmost is the cloud tier. A recent adoption of IoT is in the industrial environment, often referred to as the Industrial IoT (IIoT): the concept of using devices in such a way that they combine to give a better yield [2]. In an industrial environment, uneven resource utilization is a very common problem [3], where a few devices are overburdened compared to others. In the traditional industrial environment, most devices are static or move within a predefined area; however, some devices, such as robots and automobiles, move freely across the entire workshop.

The primary motivation behind this work is technology adoption in IIoT to facilitate delay-sensitive computation systems and manage quality of service. Unfortunately, using the traditional cloud to support autonomous devices is not viable for many reasons, such as security, communication delay, and cloud data center scheduling policies. Efficient resource utilization is another motivation of this research: in IIoT, many resources are wasted due to poor resource management strategies.

Thus, the advent of high-speed wireless data rates and Multi-access Edge Computing (MEC) technologies provides a baseline for more advanced development of the IIoT environment [4]. A massive amount of data can be transferred to edge servers to train machine learning models, perform complex computations, and temporarily cache the data [5,6,7,8].

Furthermore, in existing work, fog nodes are either placed at fixed locations to support delay-sensitive applications [9, 10] or a federated approach is used for balancing workload [11,12,13]. However, fixed-location deployment of a fog node can cause a single point of failure, and such solutions are not scalable.

In an IIoT environment, many resources remain underutilized [14]; therefore, predicting the location of mobile nodes is very important to ensure proper utilization of these idle resources [15]. A scalable approach can help improve the entire factory process, including assembly logistics and the supply chain. Fog computing has emerged as a technology where edge and fog servers bring the computing infrastructure close to the IIoT devices and improve the Industry 4.0 architecture design [16].

Contribution – In this work, we remove the limitation of FogNetSim [17], where only selected devices can act as broker nodes to schedule incoming computing requests. FogNetSim followed the standard definition of fog computing, where a selected number of devices located at the edge of the network act as fog nodes. However, such a system fails to perform well with many IoT devices. In contrast, the current work focuses on scalability: every IoT device in the network can act as a fog node depending on its availability and certain constraints such as energy level and computation power. Also, whereas the previous work placed fog nodes in parallel at a second tier, this work proposes a hierarchical placement scheme. In the proposed multi-layer fog framework for IIoT, every device can work as an autonomous fog broker; thus, the scalability issue of conventional fog networks is addressed with a distributed system. We propose the concept of multi-functional devices acting as fog nodes. Other salient features of the work are:

  • Propose a multi-layer fog deployment framework for task scheduling and big data processing in an industrial environment.

  • A probabilistic model is adopted to quantify the efficiency of hierarchical placement of fog servers over conventional flat placement in terms of computation and communication delay.

  • A priority queuing technique is used to schedule the IIoT data and tasks in a multi-layer fog network.

  • In the proposed scheme, devices in the IoT layer can act as fog devices to perform complex computation tasks, minimizing communication overhead.

  • A multi-layer scheduling algorithm that uses all layers to schedule incoming tasks.

  • A localization module is proposed that enables the fog brokers to predict the location of mobile nodes.

  • The proposed simulation framework is evaluated in terms of memory, CPU, communication delays, computation delays, and energy. It is further compared with other existing solutions regarding workload acceptance/completion ratio.

The rest of the paper is structured as follows: Section 2 covers the state-of-the-art literature review of existing fog-based IIoT frameworks; Section 3 covers the system model; system components are discussed in Section 4; and results are discussed in Section 5.

Literature review

In this section, a state-of-the-art literature review is conducted to cover the existing fog solutions for IIoT, including the location-aware schemes to determine the location of moving nodes in an IIoT environment.

Fog computing in an industrial IoT – Kumar et al. [18] proposed a fog framework for IIoT networks that combines technologies like blockchain and edge computing to solve current IIoT problems such as latency, task computation, and security. They evaluated the efficiency of the proposed framework in terms of network usage, power consumption, and latency through simulation, comparing it against a non-blockchain environment.

Chen et al. [19] proposed a Kronecker-supported, fog-based optimized compression scheme for IIoT data. The scheme first uses a k-means clustering algorithm to compute the spatial correlation among IoT data, which yields better compression with low communication overhead. A two-dimensional Kronecker-supported data compression mechanism at the fog node then recovers the data to its original shape with high precision and accuracy, which also minimizes the communication overhead between fog and cloud. An efficient algorithm is proposed and evaluated in a simulation framework; the results show that the scheme is energy efficient while providing good quality of service.

Chekired et al. [20] proposed a hierarchical technique for fog server placement in the IIoT. Requests are classified into two categories, high priority and low priority, where high-priority requests demand an urgent response. A workload scheduling algorithm is also proposed to offload requests to fog servers at different tiers of the hierarchy. The solution is evaluated using actual industrial data from the Bosch Group and compared with conventional strategies.

Liu et al. [21] proposed a multilevel indexing model for service discovery in the fog layer of IIoT. Service discovery is crucial because it determines how efficiently user requests can be accommodated. The proposed model is based on equivalence relations and is named the “distributed multilevel (DM)-index model”; it supports fog-layer retrieval and service maintenance in IIoT, reducing redundancy, minimizing retrieval and traversal time, and narrowing the search space. The model is evaluated experimentally and theoretically and shows its effectiveness compared to inverted-index and sequence models.

Mubeen et al. [22] developed a prototype to offload controller tasks to fog or cloud in industrial control systems. Many experiments are performed to investigate the interplay of fog computing, cloud computing, and the IIoT. A mitigation mechanism is also applied to reduce network delay when controllers are offloaded to cloud or fog infrastructures.

Chen et al. [23] proposed an energy-efficient offloading scheme for IIoT in fog network environments. The purpose is to minimize energy consumption while offloading dynamic computation requests to the fog layer. The energy-minimization computation offloading problem is formulated with energy, delay, and other network parameters, and an algorithm is proposed to solve it through joint optimization of the offloading ratio and transmission time. The dynamic voltage scaling technique is also used to further reduce energy consumption during offloading. Overall, the proposed solution jointly optimizes transmission time, local CPU frequency, transmission power, and offloading ratio.

Yu et al. [24] proposed a secure data deletion technique for industrial fog environments, an area that remains relatively unexplored. The research proposes a framework where IoT devices, fog, and cloud combine to form an industrial environment, providing better control of data in the fog-cloud architecture of IIoT. The proposed protocol takes advantage of attribute-based encryption. Theoretical evaluation shows good performance for the proposed protocol.

Mukherjee et al. [25] formulated the problem of task offloading in fog computing for the IIoT. Latency-driven applications are very challenging in IIoT because IoT sensors usually work in an automated environment where control signals with minimum latency are necessary. The proposed strategy is evaluated through simulations, and the results show that it is effective and scalable.

Fu et al. [26] proposed a data storage and search scheme for IIoT utilizing fog-cloud technologies. Because IIoT devices are often placed in isolated and remote areas, they are vulnerable. In the proposed technique, data is first processed at edge servers, delay-sensitive data is stored locally at the edge, and the remaining data is processed and stored at cloud servers. Simulation results demonstrate the efficiency of the proposed scheme.

Aazam et al. [27] proposed a fog-based framework for Industry 4.0, where many IoT devices, machines, business processes, appliances, and personnel interact with each other and generate a massive amount of data. To process this data in a time-sensitive manner, they proposed a middle fog layer that communicates with all devices and processes data at the edge of the industry. Several use cases are presented, and research challenges are discussed.

Lin et al. [28] proposed a cost-efficient strategy for fog server deployment at logistics centers in Industry 4.0. The work investigates the placement of fog servers, gateways, edge servers, sensors, and clouds in Industry 4.0 to minimize deployment cost. This NP-hard facility-location problem is solved with a metaheuristic that combines a genetic algorithm, to enhance computational efficiency, with a discrete monkey algorithm, to find quality solutions.

To summarize, existing fog solutions focus on horizontal placement of resources, which can lead to high delay when tasks are large and split into multiple smaller tasks. They also do not address the energy, mobility, and IIoT-device aspects of the system.

Localization – In this section, we review the existing techniques used to find the location of mobile nodes on a fog computing platform. In a fog-cloud integrated environment, static and mobile IoT devices generate data to be processed at fog nodes or the cloud; therefore, location is an essential decision parameter.

Chen et al. [29] proposed a weighted-factors localization algorithm for fog computing that covers both specific and general localization. The evaluation compares the proposed algorithm with two other algorithms in a simulation environment, with positive results.

Guidara et al. [30] investigated the localization of mobile nodes in an indoor environment. Wireless nodes are placed inside a building or premises, and these IoT devices collect data. The data is sent to a central processing node, the fog, where the position of the unknown node is estimated. The proposed algorithm finds the location of a node with the shortest delay and without requiring powerful processing nodes.

FogLight [31] is a localization solution for IoT devices that relies on visible light and spatial encoding. Spatial encoding is produced when mechanical mirrors inside a projector are flipped according to binary images. It employs simple light sensors that can be attached to gas meters, light switches, or thermometers to make their locations discoverable. The sensor units perform localization with high accuracy and minimal processing overhead, finding the location of any low-power IoT device. Results show that FogLight locates a device with an accuracy of 1.8 mm.

Femminella et al. [32] proposed a distributed signaling protocol for service function localization. The protocol's functions include peer discovery in the transport layer and signaling distribution, which is divided into two parts of signaling delivery: downstream and reverse path. The protocol is evaluated experimentally.

Bhargava et al. [33] proposed a fog-based localization solution for ambient assisted living. The proposed system uses a low-cost wearable sensor device and a cloud gateway to localize ambient-assisted-living personnel. Using the given topology information, the distance covered by a device is calculated along with direction values. The proposed algorithm is evaluated in both indoor and outdoor environments.

To conclude this discussion, many existing solutions ignore important factors like device-to-device communication, mobility, energy consideration, fog federation, and fog placement that can improve the performance of an IIoT-based simulation framework. The work proposed in [34,35,36,37] lacks mobility support and allows only horizontal fog placement, whereas none of [17, 18, 34,35,36,37,38,39,40,41,42] supports fog federation or IIoT devices acting as fog devices. A detailed comparison of these features is given in Table 1. The proposed work focuses on fog placement in both directions (distributed and hierarchical), uses IIoT devices that meet certain criteria as fog devices, finds the location of unknown mobile IIoT devices to improve reliability, and incorporates distributed fog locations that can work in a federated architecture to provide resources on demand.

Table 1 Fog Simulators Comparison

System model

The abstract view of the system architecture is shown in Fig. 1, where the IIoT system is divided into multiple layers. The first layer comprises \(U_n\) IoT sensors and static and mobile devices. These devices are categorized into two classes: user devices that generate data and request computations, and computationally strong devices that also volunteer their resources to act as fog nodes. The second layer comprises n fog devices placed hierarchically; they receive computation requests from the lower layer and execute tasks level by level. The topmost layer is the cloud layer, which performs complex executions. The module interactions of the proposed framework are explained in Fig. 2. There are \(L_m\) fog locations connected to a cloud data center; these fog locations share the distributed workload in a federated way. The summary of notations is given in Table 2.

Fig. 1

System architecture – IIoT sensors and devices offload tasks via a wireless link between devices and fog nodes. The fog nodes are placed in a hierarchical model classified into multiple levels. Level-1 fog nodes are the intelligent nodes that decide whether to offload tasks to volunteer devices, same-level fog, higher-level fog, or a nearby fog location

Fig. 2

Architecture of the proposed framework illustrating module interactions

Table 2 Summary of Notations

The M/M/c queuing model is employed [44], where multiple servers contribute to executing the tasks in the queue. Assuming Poisson arrivals, the total arrival rate \(\lambda _T\) is calculated as in Eq. 1

$$\begin{aligned} \lambda _T = \sum \limits _{F_k \in F_n}^n \lambda _{F_k}. \end{aligned}$$
(1)

Where \(\lambda _{F_k}\) is the average arrival rate at the \(k_{th}\) fog node \(F_k\).
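Under these assumptions, Eq. 1 and standard M/M/c results can be checked numerically. The sketch below aggregates illustrative per-node arrival rates and computes the probability that an arriving task must wait; the rates and server count are hypothetical, and the Erlang C expression is the textbook M/M/c result rather than a formula from this paper.

```python
from math import factorial

def total_arrival_rate(fog_rates):
    """Total arrival rate lambda_T (Eq. 1): sum of per-node Poisson rates."""
    return sum(fog_rates)

def erlang_c_wait_prob(lam, mu, c):
    """Probability an arriving task must wait in an M/M/c queue
    (standard Erlang C formula; requires lam < c * mu for stability)."""
    a = lam / mu                       # offered load in Erlangs
    rho = lam / (c * mu)               # per-server utilisation
    p_wait_num = a**c / (factorial(c) * (1 - rho))
    p0_inv = sum(a**k / factorial(k) for k in range(c)) + p_wait_num
    return p_wait_num / p0_inv

lam_t = total_arrival_rate([2.0, 3.0, 1.5])      # three fog nodes, tasks/s
p_wait = erlang_c_wait_prob(lam_t, mu=4.0, c=2)  # two servers, 4 tasks/s each
```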

Execution delay – In the proposed system, there are multiple options to execute a task, and the system chooses the best available one. Initially, the priority is to execute the task on the local static/mobile device. If local resources are insufficient, the task is offloaded to the lower-level fog nodes, which are composed of volunteer devices willing to share their resources. If all these fog devices are busy and the system fails to meet the required deadline, the task is further offloaded to the second-tier fog nodes. Similarly, if the task has a relaxed deadline, it is offloaded to the cloud. When a task can be subdivided into multiple independent sub-tasks, part of which execute locally and the rest on a fog device, the execution delay \(\delta\) is calculated as:

$$\begin{aligned} \delta =\max \lbrace \delta _{\text {L}},\delta _{\text {O}}\rbrace . \end{aligned}$$
(2)

Where \(\delta _{\text {L}}\) and \(\delta _{\text {O}}\) are the execution delays for locally and remotely executed tasks, respectively. These delays are calculated separately as follows:

  1.

    Local execution delay – The local execution delay is computed based on the local tasks available in the device queue. In such cases, there is no transmission delay; thus, local execution delay is computed as:

    $$\begin{aligned} \delta _{\text {L}}=\frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {L}}\upsilon _{k}}{\Upsilon _{\text {L}}}, \end{aligned}$$
    (3)

    where \(\Upsilon _{\text {L}}\) is the execution rate of the local device in terms of millions of instructions per second (MIPS), and \(\mathcal {K}\) is the set of tasks in the local queue.

  2.

    Remote execution delay – The remote execution delay is computed based on the offloaded tasks. The brokers share the workload with other fog nodes in the hierarchy and with cloud servers. According to [25], additional delays are incurred, such as wireless uplink delay, wireless downlink delay, and network delay.

    $$\begin{aligned} \delta _{O}=\lbrace \delta _{FU} + \delta _{CU}+\max \lbrace \delta _{FP},\delta _{fU}+\delta _{CP} + \delta _{fD}\rbrace +\delta _{FD}+ \delta _{CD} \rbrace . \end{aligned}$$
    (4)

where \(\delta _{\text {CU}}\) and \(\delta _{\text {FU}}\) are the wireless uplink transmission latencies of tasks processed in the cloud and fog, respectively. Correspondingly, \(\delta _{\text {CD}}\) and \(\delta _{\text {FD}}\) are the respective wireless downlink transmission latencies. Furthermore, \(\delta _{\text {CP}}\) and \(\delta _{\text {FP}}\) are the processing latencies of the cloud and fog, and \(\delta _{\text {fD}}\) and \(\delta _{\text {fU}}\) are the downlink and uplink fronthaul latencies of tasks processed in the cloud.

According to [45] the wireless uplink transmission latency for cloud \(\delta _{\text {CU}}\) and fog \(\delta _{\text {FU}}\) are calculated as follows:

$$\begin{aligned} \delta _{\text {FU}} = \frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {F}}B_{k}^{\text {in}}}{\beta _{\text {U}}},\delta _{\text {CU}}=\frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {C}}B_{k}^{\text {in}}}{\beta _{\text {U}}}, \end{aligned}$$
(5)

Similarly, the wireless downlink transmission latency for cloud and fog are calculated as follows:

$$\begin{aligned} \delta _{\text {FD}} = \frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {F}}B_{k}^{\text {out}}}{\beta _{\text {D}}},\delta _{\text {CD}}=\frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {C}}B_{k}^{\text {out}}}{\beta _{\text {D}}}. \end{aligned}$$
(6)

Here, \({\beta _{\text {U}}}\) and \({\beta _{\text {D}}}\) are the wireless uplink and downlink transmission rates [46], calculated as follows:

$$\begin{aligned} \beta _{U}= & {} \Upsilon _{U}\log _{2}\left( 1+\frac{\rho _{U}\iota _{U}^{2}}{\Lambda _{0}}\right) ,\nonumber \\ \beta _{D}= & {} \Upsilon _{D}\log _{2}\left( 1+\frac{\rho _{F}\iota _{D}^{2}}{\Lambda _{0}}\right) \end{aligned}$$
(7)

Where \(\rho _{\text {U}}\) and \(\rho _{\text {F}}\) are the transmission powers of mobile devices and fog devices, respectively. The \(\iota _{\text {U}}\) and \(\iota _{\text {D}}\) are the wireless channel gains for uplink and downlink, which are constant over time. Moreover, \(\Upsilon _{\text {U}}\) and \(\Upsilon _{\text {D}}\) are the bandwidths of uplink and downlink, and \(\Lambda _{0}\) is the noise power. With the computation speeds (MIPS) at the fog and cloud defined as \({\zeta _{\text {F}}}\) and \({\zeta _{\text {C}}}\), the fog and cloud processing latencies can be calculated as [47]:

$$\begin{aligned} \delta _{\text {FP}} = \frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {F}}D_{k}}{\zeta _{\text {F}}},\delta _{\text {CP}}=\frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {C}}D_{k}}{\zeta _{\text {C}}}. \end{aligned}$$
(8)

Finally, with \({\xi _{\text {F}}}\) as the fronthaul capacity [48], the uplink and downlink fronthaul transmission latencies for tasks processed in the cloud are calculated as:

$$\begin{aligned} \delta _{\text {fU}} = \frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {C}}B_{k}^{\text {in}}}{\xi _{\text {F}}},\delta _{\text {fD}}=\frac{\sum \nolimits _{k\in \mathcal {K}}\sigma _{k}^{\text {C}}B_{k}^{\text {out}}}{\xi _{\text {F}}}. \end{aligned}$$
(9)

Hence, the total application latency becomes

$$\begin{aligned} {\delta =\max \lbrace \delta _{\text {L}},\delta _{\text {F}},\delta _{\text {C}}\rbrace ,} \end{aligned}$$
(10)

where \(\delta _{\text {F}}\), the fog execution latency, and \(\delta _{\text {C}}\), the cloud execution latency, are given as:

$$\begin{aligned} \delta _{\text {F}}= & {} \delta _{\text {FU}}+\delta _{\text {CU}}+\delta _{\text {FP}}+\delta _{\text {FD}}+\delta _{\text {CD}}\nonumber \\ \delta _{\text {C}}= & {} \delta _{\text {FU}}+\delta _{\text {CU}}+\delta _{\text {fU}}+\delta _{\text {CP}}+\delta _{\text {fD}}+\delta _{\text {FD}}+\delta _{\text {CD}}. \end{aligned}$$
(11)
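The delay terms above can be evaluated end to end with hypothetical numbers. In the sketch below, task sizes, link rates, and processing speeds are illustrative only, and the `assignment` field stands in for the indicator \(\sigma _k\) that routes each task to local, fog, or cloud execution (Eqs. 3-10).

```python
# Hypothetical per-task records: (assignment, B_in bits, B_out bits, D instructions)
tasks = [("fog", 2e6, 0.5e6, 4e8), ("cloud", 8e6, 1e6, 2e9), ("local", 0, 0, 1e8)]

beta_u, beta_d = 50e6, 80e6     # wireless up/downlink rates (bit/s), cf. Eq. 7
xi_f = 1e9                      # fronthaul capacity (bit/s)
zeta_f, zeta_c = 8e9, 40e9      # fog / cloud speed (instructions/s)
upsilon_l = 1e9                 # local device speed (instructions/s)

def sum_if(where, field):
    """Sum one field over all tasks assigned to `where`."""
    idx = {"B_in": 1, "B_out": 2, "D": 3}[field]
    return sum(t[idx] for t in tasks if t[0] == where)

delta_l  = sum_if("local", "D") / upsilon_l                      # Eq. 3
delta_fu = sum_if("fog", "B_in") / beta_u                        # Eq. 5
delta_cu = sum_if("cloud", "B_in") / beta_u
delta_fd = sum_if("fog", "B_out") / beta_d                       # Eq. 6
delta_cd = sum_if("cloud", "B_out") / beta_d
delta_fp = sum_if("fog", "D") / zeta_f                           # Eq. 8
delta_cp = sum_if("cloud", "D") / zeta_c
delta_fu_fh = sum_if("cloud", "B_in") / xi_f                     # Eq. 9
delta_fd_fh = sum_if("cloud", "B_out") / xi_f

delta_f = delta_fu + delta_cu + delta_fp + delta_fd + delta_cd   # fog branch of Eq. 4
delta_c = (delta_fu + delta_cu + delta_fu_fh + delta_cp
           + delta_fd_fh + delta_fd + delta_cd)                  # cloud branch of Eq. 4
delta = max(delta_l, delta_f, delta_c)                           # Eq. 10
```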

Probabilistic model – A probabilistic model is presented in this section to evaluate the probability that the servers located at a lower level of the hierarchy can execute the received workload or must schedule it to the higher level. Here \(\sigma _1,\sigma _2,\ldots ,\sigma _n\) are independent random variables. The probability that the level-1 servers can successfully serve the assigned workload is given as [49] \(\mathbb {P}\left( {{\sigma _i}^1 \le \ {\zeta _i}^1\ } \right)\), which is calculated as shown below:

$$\begin{aligned} \mathbb {P}\left( {{\sigma _1}^1 \le \ {\zeta _1}^1, \ldots ,{\sigma _s}^1 \le \ {\zeta _s}^1\ } \right) = \prod \limits _{i = 1}^{s^1} \mathbb {P} \left( {{\sigma _i}^1 \le \ {\zeta _i}^1} \right) . \end{aligned}$$
(12)

Where \({\sigma _s}^1\) is the workload received by, and \({\zeta _s}^1\) the capacity of, the \(s_{th}\) server at level 1. According to the capacity of each level-1 server, there are two scenarios.

  1.

    If \({\sigma _i}^1 \le \ {\zeta _i}^1\), i.e., the workload received at the \(i_{th}\) server at level 1 is at most the computation capacity of that server, no workload is offloaded.

  2.

    If \({\sigma _i}^1 > \ {\zeta _i}^1\), i.e., the workload exceeds the capacity of the server, the amount of workload that the server offloads to a level-2 server is:

    $$\begin{aligned} {Off}_i = \ {\sigma _i}^1 - {\zeta _i}^1. \end{aligned}$$
    (13)

Hence the Cumulative Distribution Function (CDF) of the offloaded workload is given as:

$$\begin{aligned} {\mathbb {F}_{{Off}_i}}\left( {{\text {Cap}_i}^1} \right)= & {} \mathbb {P}\left( {{Off}_i \le \ {\text {Cap}_i}^1} \right) \nonumber \\= & {} \left\{ \begin{array}{ll} \mathbb {P}\left( {{\sigma _i}^1 \le {\text {Cap}_i}^1 + {\zeta _i}^1} \right) &{} {\text {if}\ {\text {Cap}_i}^1 \ge 0}\\ 0 &{} \text {otherwise} \end{array} \right. . \end{aligned}$$
(14)

The total workload offloaded by the level-1 servers to level-2 servers is calculated as

$$\begin{aligned} \sigma _{total} = \sum \limits _{i = 1}^{s^1} {{Off}_i}. \end{aligned}$$
(15)
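The offloading rule of Eqs. 13-15 and the joint-service probability of Eq. 12 can be sketched as follows. The capacities, workloads, and the uniform workload distribution are purely illustrative; Eq. 12 is estimated by Monte Carlo rather than evaluated in closed form.

```python
import random

def offloaded(workloads, capacities):
    """Per-server offloaded volume Off_i (Eq. 13): max(sigma_i - zeta_i, 0)."""
    return [max(s - z, 0.0) for s, z in zip(workloads, capacities)]

def prob_all_served(capacities, draw, trials=10_000):
    """Monte Carlo estimate of Eq. 12: the probability that every level-1
    server can absorb its own independently drawn workload."""
    hits = sum(1 for _ in range(trials) if all(draw() <= z for z in capacities))
    return hits / trials

caps = [100.0, 80.0, 120.0]                        # capacities zeta_i
off = offloaded([130.0, 60.0, 120.0], caps)        # -> [30.0, 0.0, 0.0]
total_off = sum(off)                               # Eq. 15: workload sent to level 2
p_serve = prob_all_served(caps, lambda: random.uniform(0.0, 150.0))
```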

Location estimation – The mobile nodes follow different mobility models and can be localized within the physical area. The location of an unknown mobile node can therefore be estimated from the locations of some known nodes; these known nodes may be static nodes or nodes whose locations have already been estimated. Let the coordinates of the unknown node k be \((x_k,y_k)\), and let there be n nodes with known coordinates \((x_i,y_i)\), \(i=1,\ldots ,n\). The location of the unknown node is calculated as [50]:

$$\begin{aligned} (x_{k}-x_{i})^{2}+(y_{k}-y_{i})^{2}=d_{ki}^{2} \end{aligned}$$
(16)

where \(d_{ki}\) is the distance between the unknown node k and the \(i_{th}\) known node.

For the n known nodes, Eq. 16 expands to the linear system:

$$\begin{aligned} \left\{ \begin{array}{l} -2x_{1}x_{k}-2y_{1}y_{k}+x_{k}^{2}+y_{k}^{2}=d_{k1}^{2}-x_{1}^{2}-y_{1}^{2}\\ -2x_{2}x_{k}-2y_{2}y_{k}+x_{k}^{2}+y_{k}^{2}=d_{k2}^{2}-x_{2}^{2}-y_{2}^{2}\\ \cdots \cdots \cdots \cdots \\ -2x_{n}x_{k}-2y_{n}y_{k}+x_{k}^{2}+y_{k}^{2}=d_{kn}^{2}-x_{n}^{2}-y_{n}^{2} \end{array}\right. \end{aligned}$$
(17)

Let \(Q_{i}=x_{i}^{2}+y_{i}^{2}\), \(R_{k}=x_{k}^{2}+y_{k}^{2}\), and \(C=[x_{k},\ y_{k},\ R_{k}]^{T}\), with

$$\begin{aligned} X= & {} \left( \begin{array}{ccc} -2x_{1} &{} -2y_{1} &{} 1\\ -2x_{2} &{} -2y_{2} &{} 1\\ \vdots &{} \vdots &{} \vdots \\ -2x_{n} &{} -2y_{n} &{} 1 \end{array}\right) ,\\ Y= & {} \left( \begin{array}{c} d_{k1}^{2}-Q_{1}\\ d_{k2}^{2}-Q_{2}\\ \vdots \\ d_{kn}^{2}-Q_{n} \end{array}\right) , \end{aligned}$$

In matrix expression:

$$\begin{aligned} XC=Y \end{aligned}$$
(18)

and

$$\begin{aligned} C=(X^{T}X)^{-1}X^{T}Y \end{aligned}$$
(19)

Finally, the position of the unknown mobile node k in the 2-dimensional plane is given as:

$$\begin{aligned} (x_{k},\ y_{k})=(C(1),\ C(2)) \end{aligned}$$
(20)
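To make Eqs. 17-20 concrete, the sketch below solves the linearized system for one unknown node. It is a minimal pure-Python illustration with three hypothetical anchors, so \(XC=Y\) is square and the least-squares solution of Eq. 19 reduces to an exact solve by Gaussian elimination; with more anchors one would apply the pseudo-inverse of Eq. 19.

```python
def locate(anchors, dists):
    """Position fix (Eqs. 17-19) for exactly three known anchors:
    solve X C = Y with C = [x_k, y_k, x_k^2 + y_k^2]^T."""
    X = [[-2.0 * x, -2.0 * y, 1.0] for x, y in anchors]
    Y = [d * d - (x * x + y * y) for (x, y), d in zip(anchors, dists)]  # d_ki^2 - Q_i
    # Gaussian elimination with partial pivoting on the augmented system
    A = [row + [rhs] for row, rhs in zip(X, Y)]
    n = len(A)
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    C = [0.0] * n
    for r in reversed(range(n)):
        C[r] = (A[r][n] - sum(A[r][c] * C[c] for c in range(r + 1, n))) / A[r][r]
    return C[0], C[1]                  # (x_k, y_k), as in Eq. 20

# Unknown node at (3, 4); exact distances from three known anchors.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [((3 - x) ** 2 + (4 - y) ** 2) ** 0.5 for x, y in anchors]
x_k, y_k = locate(anchors, dists)      # recovers approximately (3.0, 4.0)
```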

Proposed framework

The proposed simulation framework provides an infrastructure where mobile and static nodes can become part of the simulation. A device can request resources from other devices that have unused resources. Let us assume there are n user devices that seek resources from m dedicated fog devices, and r fixed brokers that receive these requests. The broker nodes find the most suitable fog device to offload each incoming request. Exploiting device-to-device communication, fog devices send results directly to the user device to avoid extra delay. The proposed framework allows end devices to volunteer resources and act as fog nodes; thus, a user node in the idle state offers its resources to other devices. This scheme is adopted to resolve the problem of resource under-utilization in an industrial environment. The dynamic transition from user node to fog node depends on several factors: resource availability, mobility, speed, acceleration, energy, and other contextual information.

Design components – Multiple layers characterize the architecture of the proposed framework. The core components of the proposed framework are discussed below.

IIoT device layer – The bottom-most layer is the device layer, also termed as IIoT-layer. It contains all devices available in an industrial environment, such as sensors, cyber-physical systems, robots, and automobiles. These devices have limited resources and communication range; some are placed at fixed locations, whereas others can freely move within the environment.

IIoT resource layer – In the proposed work, we introduce this layer to provide cost-effective resource sharing. It is a virtual layer containing all IIoT devices that volunteer their resources: an IIoT device becomes part of the resource layer upon accepting the resource-sharing model and shares its idle resources to enhance system performance. As more devices join this layer, the quality of service improves and the load on the communication network is reduced.

IIoT fog broker – This layer is composed of dedicated fog servers/nodes placed hierarchically in multiple sub-layers to facilitate complex tasks. These servers/nodes have significantly higher computation and storage resources and are more suitable for training intelligent models. They can also operate in a federated manner to significantly reduce the learning curve.

Cloud layer – This layer is composed of the cloud data center for batch processing and long-duration data storage.

In the proposed framework, quality of service is maintained through the volunteer nodes. Both static and mobile nodes volunteer their unused resources to their nearby broker at level 0. Volunteer nodes are accepted or rejected based on specific criteria set at the start of the simulation, as given in Algorithm 1. These criteria include a requesting node's minimum energy level and its mobility. Energy is the predominant factor because a volunteer node must have sufficient energy to be promoted to a fog node. If the incoming node meets the energy criterion, mobility is checked next. For static nodes, the distance to the broker is computed for acceptance; for mobile nodes, the direction of movement plays a significant role, and a node moving away from the broker is rejected. Accepted proposals are added to the list of available resources.


Algorithm 1 Fog Promotion Algorithm
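The accept/reject logic described above can be sketched as follows. The thresholds and the node fields are hypothetical illustrations of the criteria in Algorithm 1 (energy first, then distance for static volunteers and heading for mobile ones), not the paper's implementation.

```python
import math

# Hypothetical acceptance thresholds, fixed at the start of the simulation.
MIN_ENERGY = 0.30      # minimum residual energy fraction
MAX_DISTANCE = 50.0    # metres, for static volunteers

def promote(node, broker_xy):
    """Accept or reject a volunteer node: energy first, then mobility.
    `node` holds energy, position and, for mobile nodes, a velocity."""
    if node["energy"] < MIN_ENERGY:            # energy is the dominant criterion
        return False
    dx = node["x"] - broker_xy[0]
    dy = node["y"] - broker_xy[1]
    if not node["mobile"]:                     # static: plain distance check
        return math.hypot(dx, dy) <= MAX_DISTANCE
    # Mobile: reject a node whose velocity points away from the broker.
    moving_away = dx * node["vx"] + dy * node["vy"] > 0
    return not moving_away

broker = (0.0, 0.0)
accept_static = promote({"energy": 0.8, "x": 10.0, "y": 5.0,
                         "mobile": False}, broker)           # close enough: accepted
accept_mobile = promote({"energy": 0.5, "x": 30.0, "y": 0.0,
                         "vx": -1.0, "vy": 0.0,
                         "mobile": True}, broker)            # heading toward broker
```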

Task execution algorithm – A fog node, whether a volunteer IoT device or a dedicated fog node, receives the incoming workload, which is initially stored in the input queue. The node pops the workload from the head of the queue, executes it, and sends the result back to the requesting node or fog broker. Following [51], this process is elaborated in Algorithm 2.


Algorithm 2 Task Execution at Fog
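A minimal sketch of this receive-execute-reply loop, assuming a hypothetical task record with an id, an instruction count, and a source field:

```python
from collections import deque

class FogNode:
    """Sketch of Algorithm 2: FIFO input queue, execute, reply to requester."""
    def __init__(self, mips):
        self.mips = mips                 # execution rate (instructions/s)
        self.queue = deque()

    def receive(self, task):
        self.queue.append(task)          # incoming workload enters the queue

    def step(self):
        if not self.queue:
            return None
        task = self.queue.popleft()      # pop from the head of the queue
        exec_time = task["instructions"] / self.mips
        return {"task_id": task["id"], "exec_time": exec_time,
                "reply_to": task["source"]}   # result goes back to the requester

node = FogNode(mips=2e9)
node.receive({"id": 1, "instructions": 4e9, "source": "broker-0"})
result = node.step()                     # exec_time = 4e9 / 2e9 = 2.0 s
```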

Resource sharing algorithm – The user nodes send two types of workload: high priority and low priority. The broker nodes, the first-level fog nodes in the hierarchy, manage two queues for incoming tasks, one per priority class. The broker nodes also maintain a list of volunteer fog devices sorted by their computational resources, as well as a two-dimensional list in which each row represents a fog level and each column a fog node at that level, again sorted by computational resources. The volunteer and fog nodes periodically send beacons to the broker nodes to update the status of their resources, and the broker updates the lists dynamically, as shown in Algorithm 3. In this algorithm, lines 1-9 handle the high-priority workload, which the broker tries to execute itself to minimize delay. Lines 10-20 handle low-priority tasks with best-effort quality of service, which the system attempts to place at the nearest volunteer node; because volunteer nodes are mobile, there is a chance of connection loss and retransmission, which may add delay. Lines 21-34 cover guaranteed quality of service by offloading the task to the dedicated fog nodes in the hierarchy.

figure c

Algorithm 3 Resource Sharing Algorithm
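The dispatch decision of Algorithm 3 can be sketched as follows. The dictionary keys, the capability check on MIPS, and the cloud fallback are illustrative assumptions; the paper’s algorithm also maintains the sorted lists via beacons, which is omitted here.

```python
def dispatch(task, broker, volunteers, fog_levels):
    """Sketch of Algorithm 3's placement decision.

    `volunteers` is sorted by computational resources; `fog_levels` is
    the broker's two-dimensional list, where fog_levels[i] holds the
    fog nodes at level i, each row sorted by available resources.
    """
    if task["priority"] == "high":
        return ("broker", broker)            # execute locally, minimal delay
    if task["qos"] == "best-effort":
        for v in volunteers:                 # nearest capable volunteer
            if v["mips"] >= task["mips"]:
                return ("volunteer", v)
    # Guaranteed QoS (or no capable volunteer): walk up the fog hierarchy.
    for level in fog_levels:
        for node in level:
            if node["mips"] >= task["mips"]:
                return ("fog", node)
    return ("cloud", None)                   # fall back to the cloud

# Usage example with toy resource tables:
broker = {"name": "b0"}
volunteers = [{"name": "v1", "mips": 500}]
fog_levels = [[{"name": "f1", "mips": 2000}]]
```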

Evaluation

The proposed simulation framework is benchmarked on Ubuntu 16.04 LTS with a variable number of sensors and IIoT devices that offload workloads defined in terms of Millions of Instructions Per Second (MIPS). The workload is offloaded to the level-1 fog devices, referred to as fog brokers. The IIoT devices also volunteer their resources to the level-1 fog nodes, which promote these devices to fog nodes depending on multiple factors, including residual energy, computing power, and available storage. The fog nodes are placed on three levels, and nodes at each higher level are progressively enriched with computational resources. The last-level fog nodes are connected to a cloud data center. The network contains multiple identical setups, each denoting a distributed fog location. The framework’s performance is evaluated in terms of CPU and memory usage, delay in constructing the enriched GUI, network delay, workload computation time, workload acceptance ratio, and energy. The simulation parameters and the specification of the system on which the simulation is deployed are given in Tables 3 and 4, respectively. The simulation parameters are presented as ranges; e.g., user nodes ranging from 50 to 500 means the number of nodes is increased to measure the impact on the overall performance of the proposed system. The graphs presented here show the total number of nodes, including level-1, level-2, level-3, and volunteer nodes.

Table 3 Simulation parameters
Table 4 System specifications

Initialization delay – The proposed framework is developed on top of OMNeT++ [52], which provides an enriched GUI environment to view the running simulation. However, this GUI construction is a computationally expensive task and introduces an additional one-time delay, measured for the proposed framework as shown in Fig. 3. This delay increases with the number of network nodes.

Fig. 3
figure 3

One time component initialization and GUI construction delay

Memory and CPU usage – Memory and CPU usage are directly proportional to the number of network nodes. The results are obtained by combining all network components, including IIoT devices, fog nodes, cloud servers, and networking components such as routers and switches. Figure 4 shows that memory and CPU usage increase with the number of nodes, owing to the additional simulation modules and objects instantiated.

Fig. 4
figure 4

Memory and CPU usage of the system with respect to number of nodes (IIoT + Fogs + Cloud + Network Components)

Workload completion time – The workload completion time is measured for two scenarios: horizontal and vertical placement. Horizontal placement means distributed fog locations, while vertical placement means the hierarchical placement of fog nodes on different levels and in the cloud data center, as shown in Fig. 5(a). The arrival rate is varied, and workload execution time is measured while varying the number of nodes, as shown in Fig. 5(b). The task size is kept constant (2000 MIPS) in both scenarios.

Fig. 5
figure 5

Workload completion time with different arrival rate and network setup

Network latency – The network delay and congestion depend on the number of users offloading workloads and on the offloading frequency, as shown in Fig. 6. Figure 6(a) shows that the network latency depends on the fog placement, provided that the workload frequency is constant. If the fog nodes are in a hierarchical (vertical) architecture, the delay is minimal; it increases if the fog nodes are in a horizontal (distributed) architecture. The latency of workloads offloaded to the cloud is the highest because an external network (the internet) is used to offload them. The effect of workload frequency is shown in Fig. 6(b): the average network latency increases with the workload arrival rate.

Fig. 6
figure 6

Network latency time with different arrival rate and network setup

Residual energy – In the simulation, every IIoT device functions in one of two modes: user mode, when it offloads workloads, or volunteer mode, when it executes received workloads. Each device joins the network with a predefined energy value that decreases over time depending on usage. For example, Fig. 7 shows the residual energy of a volunteer node and two randomly selected user nodes. When the energy of a node drops to 0, it becomes inactive.

Fig. 7
figure 7

Residual Energy of the IIoT devices in user mode where devices offload workloads, and volunteer mode when devices execute received workloads
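The energy depletion described above can be sketched with a simple model. The linear per-instruction cost and the parameter names are assumptions for illustration; the simulator’s actual energy model may differ.

```python
def drain_energy(energy, workloads, cost_per_mi=0.001):
    """Sketch of the residual-energy behavior: each executed workload
    (in millions of instructions, MI) reduces the node's energy; when
    energy reaches 0 the node becomes inactive.
    """
    active = True
    for mi in workloads:
        energy = max(0.0, energy - mi * cost_per_mi)
        if energy == 0.0:
            active = False   # node can no longer serve workloads
            break
    return energy, active
```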

Workload acceptance ratio – The proposed framework is compared with FogNetSim++ [17] in terms of workload acceptance ratio. Figure 8 shows that the proposed framework outperforms FogNetSim++, achieving a higher acceptance ratio as the number of nodes increases.

Fig. 8
figure 8

Workload acceptance ratio compared to FogNetSim++ [17]: FogNetSim++ starts dropping incoming workloads as the number of user nodes increases, whereas the proposed framework keeps accepting them

Availability

The framework is developed using OMNeT++ and the INET framework. It is an open-source project and is accessible in a GitHub repository (Footnote 1).

Conclusion

To summarize, the inclusion of fog computing in IIoT moves an IIoT system’s performance toward next-generation networks. Low latency is essential in many control applications of an IIoT system, and fog computing can enhance system efficiency and reduce the risk of damage by providing a quick response to the respective machinery. The proposed framework provides a general platform to simulate IIoT fog networks and addresses the resource under-utilization problem by promoting unused resources to fog compute resources at the local level, which reduces delay and ensures the availability of resources. The localization module of the framework helps find the location of a mobile node in the network, and this information helps the resource allocation algorithm make better decisions. The framework’s efficiency is measured in terms of CPU usage, memory usage, and GUI construction delay. The performance of the resource sharing algorithm is compared with [17], and it is observed that the cited framework starts dropping incoming workloads after a specific time, whereas the proposed framework manages resources better and keeps accepting workloads for a longer time. In the future, additional rejection criteria can be added for volunteer nodes, for example accepting or rejecting them based on security criteria such as their level of trust, so that malicious nodes cannot degrade the framework’s efficiency and performance.