1 Introduction

IoRT is a modern scientific paradigm incorporating various concepts, such as Cloud Robotics (CR), Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT) [1]. IoT integration with robotics is mainly aimed at enhancing movement, sensing, monitoring, and autonomous behavior in robotic systems. The growth of the IoRT market is driven by industrial and e-commerce applications, healthcare, household appliances (personal robots), and vehicles [2]. Many IoRT applications have real-time service needs and face devastating consequences if the response delay surpasses the tolerated latency. Moreover, due to limited resources, such as low battery capacity and limited computation, these devices cannot operate for long periods [3]. Therefore, energy consumption is also an essential measure that must be taken into account. Accordingly, enhancing real-time processing efficiency for enormous quantities of data while minimizing energy consumption is a major challenge.

Recent studies have shown that CR represents a merger of robots with cloud technologies in order to extend the capabilities of robots using offloading techniques [4]. Computation offloading was explored in order to decrease energy usage and improve the performance of task processing. However, conventional central clouds are typically located in remote locations, far from their consumers [5]. In addition, cloud robotics is frequently unable to serve numerous robots because of latency, bandwidth, and network congestion restrictions. Hence, long transmissions from IoRT devices to cloud servers may create delays and add to the cost of transmission energy [6]. Therefore, fog computing was introduced to overcome the constraints of cloud computing. Fog computing stretches cloud and networking services to the network edge, nearer to terminal devices, with minimal data processing latency and high mobility, which may decrease the load on the main network and maximize energy savings. Fog computing puts both computation and radio resources closer to IoRT devices, boosting scalability in both areas [6].

Energy efficiency is extremely important from the future standpoint of information and communication technology (ICT) [7], and there is consequently much effort to minimize energy consumption. Computation task offloading is believed to be a way to reduce energy consumption [8]. There are two types of computation offloading: the binary offloading approach, in which a task is processed either entirely locally or entirely at the edge; and the partial computation offloading strategy, in which users can divide tasks such that some parts are performed locally and others are offloaded to the edge [9]. Partial offloading is preferred because it achieves higher energy savings and lower computation latency than binary offloading. Computation offloading consists of three main steps: (i) data transmission, (ii) remote processing, and (iii) response [10]. The third step is usually ignored because the amount of data needed to represent the computation results and send them back to the end user is generally significantly smaller than the amount of data necessary to offload the task [11].

This research aims to devise a new technique to offload tasks to meet the needs of IoRT latency-sensitive systems with the optimal use of the resources available on the fog-cloud platform. The objective is to reduce the overall service time and decrease the consumed energy of the IoRT systems. The major contributions of this paper can be summarized as follows:

  • Proposing a novel paradigm for the Internet of Robotic Things (IoRT) that is based on fog robotics, as opposed to the traditional approach of using cloud robotics. By leveraging the power of fog computing, we aim to enable robots to complete tasks within strict deadlines, while also ensuring energy efficiency and reducing the burden on the communication network.

  • Proposing a partial computation offloading scheme that uses a mathematical model to find the optimal solution, taking into account features of the application such as CPU requirements, network requirements, latency sensitivity, and resource usage to reduce the energy consumed by latency-sensitive IoRT apps that are based on the fog-cloud framework. The scheme optimizes the offloading ratio and transmission rate to minimize energy consumption while ensuring that task deadlines are met.

  • Proposing an Improved Hybrid Optimization technique based on GA Combined with PSO, called IHOGCP, to achieve efficient computation offloading. The proposed algorithm provides a trade-off between exploration and exploitation capabilities, which leads to better convergence and higher-quality solutions.

  • Conducting simulations to evaluate the performance of the proposed algorithm, and comparing it with full offloading and other state-of-the-art techniques. The simulation results show a 22.85% improvement in overall energy consumption, indicating the superiority of our algorithm over other existing techniques.

This research is outlined as follows: Sect. 2 provides background, including information about IoT and IoRT technologies. Section 3 shows relevant literature pertaining to the study subject. Section 4 discusses the proposed approach for IoRT based on fog computing. Section 5 provides the system model and the problem formulation. The methodology is presented in Sect. 6. Section 7 presents the proposed algorithm and describes GA and PSO separately. Section 8 presents the experimental analysis and results. In Sect. 9, the conclusion and future work are presented.

2 Preliminaries and background information

2.1 Overview of the internet of things

The IoT principle allows a vast number of “things” that can be uniquely addressed to communicate and transmit data via existing internet or network protocols like TCP/IP. It produces and processes massive amounts of data on a scale not seen before [12]. It seeks to make human society intelligent, convenient, and efficient, with huge environmental and economic benefits. Processing, energy, bandwidth, and storage restrictions are common in IoT devices. The Internet of Things can make human life easier, safer, and more intelligent. Many fields, including the military, agriculture, manufacturing, healthcare, robotics, and nanotechnology, are benefiting from IoT's steady advancement [13]. Smart homes, smart cities, smart transportation and mobility, smart healthcare, smart factories, and smart manufacturing are just a few of the numerous applications available to users [14]. Table 1 shows the Internet of Things application domains and their properties [15, 16].

Table 1 Internet of things application domains and their properties

IoT technology provides a solid foundation for consumers to add intelligence to their current devices and connect them to the Internet to share information between devices. As the theory evolves and contributes to major technological developments in different fields of usage, new terminology has emerged, such as the Internet of Medical Things (IoMT), Internet of Nano Things (IoNT), Internet of Mobile Things (IoMBT), Internet of Cloud Things (IoCT), Internet of Autonomous Things (IoAT), Internet of Drone Things (IoDT), Industrial Internet of Things (IIoT), Internet of Robotic Things (IoRT), and many more [17]. In recent years, the Internet of Robotic Things, which combines sensing devices and robotic systems, has gained prominence as a result of the widespread availability of such devices.

2.2 Overview of internet of robotic things

This section offers an introduction to the Internet of Robotic Things. It discusses the IoRT and its architecture.

2.2.1 Aims and motivation

Through IoRT technology, one can reduce the need for human intervention in several activities such as health care delivery, transportation, and marketing. This opens up a new world of opportunities for everything from military operations to economic applications [18]. The IoRT is classified according to the application categories in Fig. 1; it can be used in different zones such as factory, social, and outdoor applications. When it comes to situational and location awareness, the IoRT system has shown its value in a variety of contexts, including decreasing power consumption while producing high-quality images, increasing system security, and even clinical services and emergencies, where it can deliver medical supplies ahead of rescue teams. However, obstacles such as old network topologies, restricted bandwidth in Wireless Sensor Networks (WSNs), outdated system integrations, and battery life constraints prevent IoRT applications from realizing their full potential [19].

Fig. 1

Classification of IoRT according to application categories

Several improvements have been brought about by robotic systems in different areas of human life. Many manufacturing industries are embracing robotics to conduct all kinds of complex, sensitive, and difficult activities, such as soldering, assembling items, inspecting products, wrapping, and many more. The integration of the smart world with independent agents (robots) has given rise to the Internet of Robotic Things [20]. Smart world apps include smart homes, manufacturing, agriculture, buildings, and factories. The key aspect of these apps is the monitoring of conditions and operations within a specific control region. Agents can only perform well-defined tasks. These intelligent agents are known as robots and come in a variety of forms, including service, mobile, and assistance robots [2]. The advent of intelligent agents in the smart world completes the IoRT concept by merging the smart world's functions and capabilities with those of robots. Robots are smart machines with sensors that can interpret real-world behavior by acquiring environmental data from sensors and taking appropriate action to resolve issues. The IoRT is a multi-robotic network that is smart, extremely effective, secure, and inexpensive. The IoRT refers to the collection, analysis, and utilization of sensor data from a number of sources, using local and distributed data to monitor and control physical items. The integration of IoT and robotics in IoRT is mainly aimed at enhancing movement, sensing, monitoring, and autonomous behavior in robotic systems.

CR is a term that refers to the use of cloud services and IoT technologies in the area of robotics [20]. This model is becoming more popular in scientific studies throughout the world because it makes it possible to use the robotic infrastructure in conjunction with IoT and the cloud. Robotic capabilities can therefore be enhanced by the use of high-performance cloud services at the backend thanks to CR, an interdisciplinary field combining distributed computing and robots. Cloud robotics is a growing discipline that relies on cloud services and other technologies that allow robots to make use of the strong computational, storage, and communication capabilities of current data centers connected via cloud infrastructures [17, 21]. IoRT shares significant characteristics with the IoT and CR, but it also has distinct differences. As a result, it has its own set of benefits and problems that must be addressed.

2.2.2 Internet of robotic things architecture

IoRT is an emerging discipline, and there have been many efforts to provide an IoRT architecture. The general IoRT architecture can be divided into five layers, as shown in Fig. 2 [17, 22, 23]: the Physical, Network, Internet, Robotic Infrastructure, and Application layers. The following section describes each of these layers.

  1.

    Physical layer: The bottom tier in the IoRT architecture is the physical layer. This layer is made up of various types of items such as vehicles, robots, home appliances, healthcare equipment, cell phones, sensors, planetary equipment, weather sensors, field equipment, underwater devices, and drones. The physical layer is mainly responsible for operating in the environment: sensing signals, acquiring data, and transmitting data to the upper layer, i.e., the Network tier [23].

  2.

    Network Layer: Various network connectivity options are provided at the network tier to allow connectivity between sensors and robots, including machine-to-machine interaction, covering cellular, short, and moderate-to-long ranges. Cellular connectivity includes 3G [24], LTE/4G, and 5G [25]. Short-range communication technologies, such as Wi-Fi, Bluetooth Low Energy (BLE) [26], 6LoWPAN [27], Broadband Global Area Network (BGAN) [28], and Near Field Communication (NFC) [29], are used for easy communication between robots close to each other. Moderate-to-long-range communication technologies such as Worldwide Interoperability for Microwave Access (WiMAX) [30], Z-Wave, ZigBee [31], and LoRa [32] have been integrated for the simple transmission of information.

  3.

    Internet Layer: This layer allows for device connectivity and access to information everywhere, and it contains protocols that perform a variety of network functions. In the IoRT architecture, the Internet layer is perceived as the core of any communication. Since IoRT builds on IoT, various IoT protocols such as UDP, IPv6, DDS, CoAP, LLAP, MQTT, and XMPP are used [33].

  4.

    Robotic Infrastructure Layer: This portion of the IoT-based CR stack is regarded as the most valuable layer of all. It is composed of five components: robotic platform support, Machine-to-Machine-to-Actuator (M2M2A) cloud platform support, Internet of Things business cloud services, Big Data services, and Internet of Things cloud robotics infrastructure. Each of them is discussed as follows:

  (a)

    Robotic Platform Support: This platform is used by scientists to create robots with rudimentary software abilities for doing experiments. It provides basic robotic service technologies like Robot Operating System (ROS), Robot Service Network Protocol (RSNP), Controller Area Network (CANopen), and Open Resource Interface for the Network (ORIN), etc.

  (b)

    Cloud Platform Support for M2M2A: This component accomplishes many tasks such as data gathering, data analysis, device monitoring, and control. The Machine-to-Machine-to-Actuator model, which is suitable for sophisticated robotic devices, i.e., IoRT, is envisioned.

  (c)

    Internet of Things business cloud services: By supporting enterprises and producers in lowering their total burden, IoRT will provide a platform upon which to build a variety of complicated business activities. All transactions under IoRT can be extended to various cloud computing models like SaaS, PaaS, and IaaS [34]. Both modular and packaged business-oriented APIs make e-commerce operation simpler.

  (d)

    Big Data services: Big Data offers extremely valuable, value-added services in areas such as optimization, forecasting, marketing, and statistical analytics.

  (e)

    IoT cloud robotics infrastructure: This part is designed to encompass services only. It allows robots to be equipped with a wide range of services, such as location identification, image and video processing, and communication management [20, 33].

  5.

    Application Layer: This is the highest layer of the IoRT architecture; it is designed to enrich the user experience by supporting a variety of robotics-related applications. When robotics is integrated with the Internet of Things, the cloud can be used in a range of applications, such as healthcare, infrastructure servicing, electronic commerce sites, retail stores, life-critical circumstances, data centers, Web Services Description Language interfaces, and others [23].

Fig. 2

Internet of robotic things general architecture

2.2.3 Internet of robotic things difficulties and problems

There are a number of unresolved difficulties related to IoRT, as well as suggestions for future research guidelines and directions on various IoRT technology and application subjects. The computational problem, scalability, and the handling of data generated by billions of devices are some of the quality of service (QoS) issues mentioned in this section. Table 2 summarizes the most significant issues related to the IoRT [35, 36].

Table 2 The key difficulties relating to the internet of robotic things

This research focuses on the energy efficiency of the computation offloading network, with the goal of reducing system energy consumption while meeting the latency requirements of the computing workloads. Total energy consumption comprises both computational and communication energy. Since IoRT is based on CR, it suffers from high-latency responses [4]. Real-time tasks are therefore processed on the IoRT devices rather than offloaded to the cloud. In order to decrease the amount of energy consumed while processing real-time tasks on the mobile device, a new approach is needed to perform these tasks with regard to their sensitivity [37]. Moving massive data sets to a central cloud is also a very energy-consuming process. Therefore, this study proposes using fog robotics instead of CR to achieve energy-efficient operation.

Fog robotics is an expansion of “Cloud Robotics” that shares storage, computation, and resources between the cloud and the edge. The name “Fog Robotics” was coined by Gudi et al. (by analogy with Fog Computing) [38]. The edge of the network is characterized by a limited number of heterogeneous resources, while the cloud may be considered an infinite pool of identical resources in remote centralized servers. The edge's resources include lightweight microservices and networking devices such as gateways, routers, switches, and access points. The purpose is to leverage the available resources in both the edge and the cloud to improve latency, bandwidth, reliability, and energy efficiency [39]. Therefore, this study moves from centralization to decentralization using fog computing.

Fog computing was introduced by Cisco [40]. It is an expansion of the cloud located much closer to Internet of Robotic Things systems, and it gives the edge of the network the ability to compute, communicate, store data, and provide services. Fog's distinguishing features include its closeness to end devices, its widespread geographic dispersion, and its support for mobility. Fog computing has merits such as reducing the amount of data sent to centralized servers, reducing delay, reducing bandwidth cost, increasing security, and enhancing services for remote locations [41]. One can now introduce a new definition of IoRT as a modern scientific paradigm incorporating various concepts, including Fog Robotics (FR), Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT).

3 Related work

The mismatch between the limited computing resources of sensing equipment and demanding real-time processing tasks results in huge energy use and inadequate performance. In recent years, computation offloading has been an important hotspot for these difficult challenges because of its benefits in timeliness and energy efficiency [42]. Hence, this section reviews related works that address energy consumption through computation offloading. Offloading strategies have been suggested in the literature in a variety of forms, and they fall into two types: (a) performance-based offloading strategies [43,44,45,46,47,48,49] and (b) energy-performance-based offloading strategies [42, 50,51,52,53,54,55,56,57,58,59].

3.1 Performance-based offloading

The performance-based offloading technique aims to improve end-user processing time. Hence, the following studies enhance the computation offloading execution time. Sheng et al. [43] investigated the relationship between differential uploading delay and co-channel interference. In addition, they suggested a computational offloading method that aimed to lower the customers' average delay by concurrently optimizing the offloading decision and resource allocation. To address the challenge of reducing the total completion period, Li et al. [44] suggested a combined optimization technique based on the Genetic Algorithm (GA) for offloading a proportion of the task, link bandwidth, and mobile edge server (MES) computational elements; however, that study considered only task completion time and ignored energy consumption. Joint Partial Offloading and Resource Allocation (JPORA) is a significant approach presented by Saleem et al. [45]. JPORA iteratively changes data segmentation to reduce task execution delay by allocating QoS-aware communication resources to cellular links and interference-aware resources to D2D links. Li et al. [46] transformed the task offloading problem of fog computing into a matching problem between tasks and nodes, obtaining a target expression with a constraint item, and presented an improved differential evolution (IDE) technique to address the task offloading issue.

Miao et al. [47] suggest a method for decreasing the processing duration of applications through an algorithm that involves the migration of tasks and the offloading of computation. The algorithm involves several components, including a task prediction mechanism based on the LSTM algorithm, a computation offloading strategy for mobile devices that utilizes task prediction, and a task migration scheme for edge cloud scheduling. These components work together to improve the efficiency of the edge computing offloading model. For mobile device users who have enough energy and solely care about the computation time overhead, Zhou and Jadoon [48] suggested a partial computation offloading technique based on game theory for multi-user edge computing. A digital twin edge network (DITEN) was suggested by Sun et al. [49]. Through deep reinforcement learning, an approach is presented to reduce the offloading latency.

3.2 Energy-performance-based offloading

The objective of an energy-performance-based offloading technique is to decrease the energy consumption and task processing duration of end-user devices; as a result, computation-intensive tasks need to be carried out on remote servers. Cheng et al. [50] presented a task offloading decision technique for Internet of Vehicles-based edge computing with Particle Swarm Optimization (PSO), where PSO was applied to cast task offloading as an optimization process and obtain the best offloading strategy. The suggested offloading approach can efficiently decrease the terminal devices' energy consumption while ensuring user service quality. Han et al. [51] utilized nonorthogonal multiple access (NOMA) so that a user is able to delegate tasks to numerous adjacent devices with idle computing resources; through a decomposition strategy, they proposed a two-tier optimization scheme called Multitask Joint Computation Offloading and Resource Allocation (MT-JCORA). Transmission Scheduling and Computation Offloading (TSCO) was developed by Hazra et al. [52] to meet energy and delay requirements in a fog environment; in addition, they presented a graph-based approach for task offloading that utilizes constrained-restricted mixed linear programming. To address energy-efficient and delay-aware task allocation problems, Singh et al. [53] introduced the Energy-Efficient Task Offloading Strategy (EETOS) based on the Levy-flight moth flame optimization (LMFO) method. Huang et al. [54] suggested a method called the Multi-Objective Whale Optimization Algorithm (MOWOA) that takes into account time and energy consumption to determine the best computation offloading mechanism. Yan et al. [55] addressed the challenge of jointly optimizing task offloading and resource allocation, taking into account both energy consumption and execution time; they also demonstrated that optimal offloading decisions follow a “one climb” policy, which enabled the development of a reduced-complexity Gibbs sampling algorithm to determine the optimal offloading decisions. Gu et al. [56] proposed a method for offloading computations and allocating transmit power that uses less energy and takes less time to complete; by dynamically offloading computation and allocating transmission power for MEC, they formulated an energy-efficiency cost reduction problem that fulfills mobile devices' completion-time deadline restriction.

With a supervised deep learning approach, Abbas et al. [57] examined the partial offloading strategy in mobile edge computing (MEC). Using the suggested approach, the complete and energy-efficient deep learning-based offloading technique (CEDOT), they effectively chose the partial offloading strategy and the portion of each unit of the task to decrease service latency and energy usage of the user equipment. In hierarchical fog-cloud systems, Nguyen et al. [58] developed innovative and effective methods for combining data compression and computation offloading to reduce energy and delay costs, using a collaborative computing paradigm that addressed vertical and horizontal cooperation across fog and cloud nodes while reducing total end-device energy under service latency limitations. In particular, they looked at scenarios in which data compression was utilized by end devices only, and by both end devices and the fog node. In a multi-user fog computing platform, Chang et al. [59] developed an energy-efficient approach for computation offloading. They assumed that users must decide whether or not to offload processes to the nearest fog node, depending on energy usage and latency constraints, and they devised an energy-efficient improvement solution to minimize energy consumption subject to delay limitations. Chen et al. [42] devised a fog-assisted IoT network data aggregation computation offloading strategy that was both privacy and energy conscious. The aim of the suggested strategy is to reduce the overall energy usage of computing tasks; it provides a secure three-layer computing architecture and an energy-efficient offloading decision technique based on momentum gradient descent. Table 3 summarizes several techniques used in related work.

Table 3 The summarization of the computation offloading researches

Furthermore, many of the previous related works in the literature focus solely on performance and ignore energy consumption, or address energy-performance but suffer from limitations such as scalability, heterogeneity, and complexity. In contrast, our proposed approach, which combines the Genetic Algorithm with Particle Swarm Optimization, achieves a balance between energy efficiency and performance. By applying partial offloading of the computational tasks between IoRT devices and fog nodes, our approach reduces energy consumption while maintaining a high level of performance. Moreover, our approach is designed to address the limitations of previous studies, such as homogeneity, complexity, and scalability. Therefore, our approach contributes to the development of more practical and effective solutions for energy-efficient and high-performance computation offloading in fog-based IoRT networks.

4 The proposed approach

This section focuses on explaining the proposed approach to reduce the energy consumption of IoRT in more detail. The approach is based on developing an IoRT hierarchical architecture for deploying IoRT applications into the three-layer fog landscape. Then, the proposed framework is introduced to minimize energy consumption using a partial task offloading strategy.

4.1 The proposed IoRT architecture

Figure 3 shows the hierarchical architecture of IoRT based on fog computing technology to help IoRT devices perform better and use less energy. The hierarchical architecture consists of the following three layers:

  1.

    Hardware Layer: It consists of various IoRT devices, which we employ solely for intelligent sensing. Typically, these devices are geographically dispersed. This layer is responsible for sensing physical objects and transmitting the perceived data to the layer above for analysis and storage. These devices are connected to upper-layer nodes by wireless technologies like SigFox, Wi-Fi, ZigBee, Bluetooth Low Energy (BLE), 4G, and 5G [2]. This layer is also responsible for taking the offloading decision by selecting the most acceptable task offloading ratio α.

  2.

    Fog Layer: This layer lies close to end devices at the edge of the network. It consists of a variety of nodes that typically involve routers, gateways, switches, access points, base stations, fog servers, etc. These nodes have the power to process, transfer, and store sensed data. In the fog layer, real-time analysis and latency-sensitive applications can be done. In addition, the fog nodes are linked to the centralized cloud servers to achieve more efficient computing and storage capabilities [42].

  3.

    Cloud Layer: It is composed of many highly efficient servers and storage units and offers diverse applications. It has strong computational and storage resources for the comprehensive processing and storing of a large volume of data [60].

Fig. 3

The hierarchical architecture of IoRT based on fog computing

4.2 The proposed IoRT framework

To reduce the energy consumption of IoRT devices, the energy consumed in the computation process is decreased by offloading a ratio of each task to fog nodes while the rest of the task is processed locally on the IoRT devices. IoRT devices generate data that differ in size; for example, data transmitted from sensors (e.g., ultrasonic sensors) are usually smaller than images transmitted from cameras. The variety of the data packages impacts the behavior of the fog node during processing: heavy data packages take more time to process than light ones. On the other hand, the computation capacity of fog nodes is weaker than that of cloud servers [41]. Therefore, to achieve a low-latency response and reduce the power consumed in the idle state at the same time, we apply the collaboration concept in the fog layer to process heavy tasks faster. If a fog node accepts a request based on its current load, it either performs the whole request or begins handling part of the application and transfers the remainder of the request to a separate node (in the case of a heavy package size). Figure 4 demonstrates the proposed fog-based IoRT framework, which is suitable for time-sensitive and energy-efficient applications [61].

Fig. 4

Internet of robotic things based on fog framework

5 System model and problem formulation

A new energy modeling method for IoRT devices is suggested in this section to increase energy efficiency. The robot quantifies and forecasts energy consumption by using the energy model as a reference for energy-saving strategies. The IoRT device's power usage takes into account six key factors: the transmission cost, the computation system cost, the idle state cost, the sensor system cost, the control system cost, and the movement system cost, as shown in Fig. 5, which represents the top six research challenges related to the design of energy-efficient IoRT systems [62, 63].

Fig. 5

The top six design considerations for energy-efficient IoRT devices

Firstly, since IoRT is based on fog computing, tasks may be executed on the IoRT device or moved to the fog nodes and executed there, after which the results may be sent back to the device, allowing further operations to be carried out. Deciding whether computation is best performed on the mobile device or in the fog is a significant matter because it has a huge impact on energy use. Also, the data gathered from sensors is transferred to the fog, which consumes energy in the communication process; the amount of energy used to send data over a network depends on the amount of data and the data transmission rate. Secondly, IoRT devices need to describe and recognize the surrounding environment, so they are connected to many sensors such as GPS, ultrasonic sensors, cameras, encoders, gyroscopes, and compasses. These sensors also affect the energy consumed, so they must be considered when calculating the energy of the robot. Finally, the control circuit board and motion system affect the robot's energy. Equation (1) represents the consumed energy based on the previous scheme [63].

$$E_{total} = E_{trans} + E_{comp} + E_{idle} + E_{sensor} + E_{control} + E_{mov}$$
(1)

In this article, we apply partial computation offloading, so we mainly take into account computation and transmission energy; the remaining terms of (1) are not affected and are grouped as Δ, as shown in (2).

$$E_{total} = E_{trans} + E_{comp} + \Delta$$
(2)

In the remainder of this part, the computation and transmission energy terms are examined in detail. Assume a system consisting of K end users (EU) scattered at random, where each end user is denoted \({EU}_{k}\), a fog node, and a distant cloud server, as demonstrated in Fig. 6. Each EU has a computation task \({T}_{k}\) = {\({M}_{k}\), \({C}_{comp}\), \({P}_{comp}\)}, where \({M}_{k}\) is the size of the task to be done, \({C}_{comp}\) is the computation capacity of \({EU}_{k}\), i.e., the number of bits processed per second, and \({P}_{comp}\) is the computation power for implementing one cycle at \({EU}_{k}\) in Watts, where k \(\in\) {1, 2, …, 8}.

Fig. 6

System model

By applying partial offloading, each \({EU}_{k}\) must offload a ratio \({\alpha }_{k}\) of its task to the fog node for computation, where this offloading ratio must be greater than 0 and less than or equal to 1, leaving the remaining fraction of the task (\(1-{\alpha }_{k}\)) to be processed locally. Hence, the local computation energy \({E}_{comp}\) and the local computation time \({T}_{k,local}\) can be calculated as:

$$E_{comp} = P_{comp} *\frac{{\left( {1 - \alpha_{k} } \right)*M_{k} }}{{C_{comp} }}$$
(3)
$$T_{k,local} = \frac{{\left( {1 - \alpha_{k} } \right)*M_{k} *L_{k} }}{{C_{comp} }}$$
(4)

where \({L}_{k}\) represents the CPU cycles needed to process 1 bit at \({EU}_{k}\).

Frequency division multiple access is used for uploading the offloaded data. The system's total bandwidth is \({B}_{total}\), and the bandwidth assigned to \({EU}_{k}\) is \({b}_{k}\). In the transmission term, energy usage involves the transmitted data size, the network transmission rate \({R}_{k}\), and the power needed to transmit data over the network, \({e}_{send}\), as shown in (5).

$$E_{trans} = \frac{{\alpha_{k} *M_{k} }}{{R_{k} }}*e_{send}$$
(5)

The transmit power \({e}_{send}\) is calculated according to the Shannon equation, and the transmission rate of \({EU}_{k}\) may be expressed as:

$$R_{k} = b_{k} \log_{2} \left( {1 + \frac{{e_{send} h_{k}^{2} }}{{N_{0} b_{k} }}} \right)$$
(6)

where \({h}_{k}\) represents the wireless channel gain and \({N}_{0}\) represents the spectral density of the channel noise power. The transmit power for data transmission in the network can then be expressed as:

$$e_{send} = \frac{{N_{0 } b_{k} }}{{h_{k}^{2} }} \left( {2^{{\frac{{R_{k} }}{{b_{k} }}}} - 1} \right)$$
(7)

The final forms of the transmission energy consumption and the uplink time are expressed below:

$$E_{trans} = \frac{{N_{0} b_{k} }}{{h_{k}^{2} }}\left( {2^{{\frac{{R_{k} }}{{b_{k} }}}} - 1} \right)*\frac{{\alpha_{k} *M_{k} }}{{R_{k} }}$$
(8)
$$T_{trans} = \frac{{\alpha_{k} *M_{k} }}{{R_{k} }}$$
(9)

Finally, the total computation time of the offloaded tasks consists of the offloading transmission time and the fog processing time. Therefore, the final form of the total energy consumed during the offloading process for all user devices in the system and the total computation time of the offloaded task can be expressed, respectively, as:

$$E_{total} = \mathop \sum \limits_{k = 1}^{j} \frac{{N_{0} b_{k} }}{{h_{k}^{2} }}\left( {2^{{\frac{{R_{k} }}{{b_{k} }}}} - 1} \right)*\frac{{\alpha_{k} *M_{k} }}{{R_{k} }} + P_{comp} *\frac{{\left( {1 - \alpha_{k} } \right)M_{k} }}{{C_{comp} }} + \Delta$$
(10)
$$T_{k,offload} = \frac{{\alpha_{k} *M_{k} }}{{R_{k} }} + \frac{{\alpha_{k} *M_{k} *L_{k} }}{{F_{comp} }}$$
(11)

where \(F_{comp}\) denotes the computation capacity of the fog node.
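
To make the model concrete, the following Python sketch (ours, not part of the original paper) evaluates Eqs. (3), (4), (7), (8), (10), and (11); all function and dictionary-key names are illustrative assumptions.

```python
def local_energy(alpha_k, M_k, P_comp, C_comp):
    """Eq. (3): energy for computing the (1 - alpha_k) task fraction locally."""
    return P_comp * (1.0 - alpha_k) * M_k / C_comp

def local_time(alpha_k, M_k, L_k, C_comp):
    """Eq. (4): time to compute the local task fraction."""
    return (1.0 - alpha_k) * M_k * L_k / C_comp

def transmit_power(R_k, b_k, h_k, N0):
    """Eq. (7): transmit power e_send from the Shannon capacity relation."""
    return (N0 * b_k / h_k**2) * (2.0 ** (R_k / b_k) - 1.0)

def transmission_energy(alpha_k, M_k, R_k, b_k, h_k, N0):
    """Eq. (8): energy to upload the offloaded fraction alpha_k * M_k."""
    return transmit_power(R_k, b_k, h_k, N0) * alpha_k * M_k / R_k

def offload_time(alpha_k, M_k, L_k, R_k, F_comp):
    """Eq. (11): uplink time plus fog processing time."""
    return alpha_k * M_k / R_k + alpha_k * M_k * L_k / F_comp

def total_energy(users):
    """Eq. (10) without the constant Delta term. Each user is a dict with
    keys alpha, M, R, b, h, N0, P_comp, C_comp (plus L and F_comp later)."""
    return sum(
        transmission_energy(u["alpha"], u["M"], u["R"], u["b"], u["h"], u["N0"])
        + local_energy(u["alpha"], u["M"], u["P_comp"], u["C_comp"])
        for u in users
    )
```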

The primary goal of the proposal presented in this article is to reduce the total amount of energy consumed while meeting the delay requirement, and to improve the performance of all IoRT devices by implementing a flexible network model based on fog computing technology. The IoRT system efficiency optimization problem has several competing objectives that must be optimized over a set of feasible solutions that are not predetermined but are defined by a set of boundaries and limitations. In order to reach the proposed goal, we leverage the QoS measures of computation time and energy consumption. They are characterized as follows:

$$Optimal\,solution = \mathop {\min }\limits_{{\left\{ {\alpha_{k} ,R_{k} } \right\}}} E_{total} = \left\{ {\begin{array}{*{20}c} {\mathop {\min }\limits_{{\begin{array}{*{20}c} {0 \le \alpha_{k} \le 1\,{\text{and}}} \\ {\sum\limits_{k = 1}^{j} {R_{k} } \le B_{total} } \\ \end{array} }} \sum\limits_{k = 1}^{j} {\frac{{N_{0} b_{k} }}{{h_{k}^{2} }}} \left( {2^{{\frac{{R_{k} }}{{b_{k} }}}} - 1} \right)*\frac{{\alpha_{k} *M_{k} }}{{R_{k} }} + } \\ {\mathop {\min }\limits_{{\begin{array}{*{20}c} {0 \le \alpha_{k} \le 1\,{\text{and}}} \\ {\sum\limits_{k = 1}^{j} {R_{k} } \le B_{total} } \\ \end{array} }} \sum\limits_{k = 1}^{j} {\left( {P_{comp} *\frac{{\left( {1 - \alpha_{k} } \right)M_{k} }}{{C_{comp} }}} \right)} } \\ \end{array} } \right\}$$
(12)

where \({T}_{k,max}\) = max[\({T}_{k,local}, {T}_{k,offload}\)] ≤ T.

In (12), the ideal scenario for optimal or near-optimal consumed energy in the first half is to reduce the energy consumed in transmission (i.e., the energy needed to transport the data packet to its destination and back to its sender). The energy in the first half can be optimized by finding the optimal values of the transmission rate \({R}_{k}\) and the offloading ratio \({\alpha }_{k}\). The second half of (12) represents the optimal scenario in terms of reducing the energy consumed during local computation of the task portion (\(1-{\alpha }_{k}\)), which depends on the offloading ratio.

In the cost function (12), the goal is to tune the parameters \({\alpha }_{k}\) and \({R}_{k}\) to minimize the overall energy consumption \({E}_{total}\), subject to the following constraints (a sketch of how they can be enforced within a metaheuristic follows this list):

  a.

    The offloading ratio of device k must be between 0 and 1.

  b.

    The total data transmission rate of the devices in the sub-region must not surpass the channel bandwidth value.

  c.

    The processing time after the offloading procedure must be less than or equal to the duration when all tasks are computed locally, and the task deadline must be met.
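
The paper does not spell out how these constraints are enforced inside the metaheuristic; a common choice, sketched below under that assumption and reusing the helper functions given after Eq. (11), is to fold them into the fitness as penalty terms (the penalty weight is illustrative):

```python
def fitness(alphas, rates, users, B_total, T_deadline, penalty=1e9):
    """Penalized form of Eq. (12): total energy plus a large penalty for
    each violated constraint (a)-(c); suitable for GA/PSO minimization."""
    cost = 0.0
    for u, a, r in zip(users, alphas, rates):
        cost += transmission_energy(a, u["M"], r, u["b"], u["h"], u["N0"])
        cost += local_energy(a, u["M"], u["P_comp"], u["C_comp"])
        if not 0.0 <= a <= 1.0:                       # constraint (a)
            cost += penalty
        t_max = max(local_time(a, u["M"], u["L"], u["C_comp"]),
                    offload_time(a, u["M"], u["L"], r, u["F_comp"]))
        if t_max > T_deadline:                        # constraint (c)
            cost += penalty
    if sum(rates) > B_total:                          # constraint (b)
        cost += penalty
    return cost
```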

6 The proposed methodology

Next, the flow of the Improved Hybrid Optimization technique based on GA Combined with PSO, named IHOGCP, is discussed. Figure 7 illustrates the main steps of the IHOGCP algorithm. The algorithm begins by randomly generating a population of individuals, defining a certain number of iterations, and setting the algorithm parameters. The initialized population is fed into the GA for the specified number of iterations. Each solution is represented by a chromosome, which is a fixed-length string. A chromosome consists of numerous genes, and the overall chromosome count indicates the population size. A fitness function is used to evaluate the quality of one solution over another. Several operators, including selection, mutation, and crossover, are employed during the evolution process to improve the solutions passed to the next generation. In consecutive iterations, solutions are developed until a suitable criterion, i.e., a maximum number of generations or a preset fitness value, is reached.

Fig. 7

The flowchart of the IHOGCP algorithm

Following that, PSO's particles are initialized from the GA-optimized solutions: the positions of the particles reflect the GA population's solutions, and the velocity of each particle is initialized at the beginning. Position update and velocity update are the two main operators in the PSO scheme. To find the optimum particle position, the two “best” values, pBest and gBest, are maintained. Based on these two “best” values, the particle's position may be changed by altering its velocity dynamically toward the optimal value, known as the global optimum. Each computation updates the velocity and position of each particle. The best position found by a particle, known as pBest, is assessed for every particle of the swarm: if the present position is better than the prior one, the recorded position is updated; otherwise, the old position is maintained. Afterward, gBest, the best position across the whole population, is determined. These steps are repeated until a stopping condition is satisfied, indicating that the population's particles are in their best positions. Finally, solutions are continuously optimized until the maximum number of generations is reached.

7 Proposed algorithm formulation

The optimization problem (12) cannot be solved by standard optimization techniques such as the Lagrange multiplier method or the KKT conditions, since it is non-convex. This problem is well suited to meta-heuristic algorithms, a computational intelligence paradigm used to address sophisticated optimization problems. Due to their robustness and ability to handle non-linearity and discontinuity, meta-heuristic optimization algorithms are extensively used; they include the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and others. Each algorithm has its own advantages and disadvantages. For example, PSO converges rapidly, but it can become locked into locally optimal solutions when solving complicated problems with high-dimensional solution spaces. Genetic algorithms, in turn, have a great ability for global search and diverse solutions, but their convergence process is time-consuming. This study incorporates the benefits of these two algorithms and proposes an Improved Hybrid Optimization technique based on GA Combined with PSO, named IHOGCP, as shown in Algorithm 1. Following that, we go through the algorithm's sequence, followed by the GA and PSO used in this study.

Algorithm 1 The proposed IHOGCP algorithm
Algorithm 2 GA operations

PSO is used to increase the performance of GA. As demonstrated in Algorithm 1, we utilize GA as the algorithm's foundation, while PSO is used to enhance the solution provided by GA. Algorithm 1 starts with a random population initialization. Then, the individuals are handled by the GA operations in Algorithm 2 for \({I}_{1}\) iterations. Following that, PSO's particles are initialized from the GA-optimized solutions provided by Algorithm 2, and the PSO operations in Algorithm 3 are performed for \({I}_{2}\) iterations. The individuals are repeatedly optimized using GA and PSO until convergence or the maximum number of generations is reached; a minimal sketch of this loop is given after Algorithm 3.

Algorithm 3 PSO operations
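
The alternating structure of Algorithm 1 can be summarized in the following Python sketch (ours); ga_step and pso_step are assumed callables standing in for Algorithms 2 and 3, and fitness_fn is assumed to map one individual to its penalized cost, e.g., a wrapper around the fitness function of Sect. 5.

```python
def ihogcp(pop, fitness_fn, ga_step, pso_step, I1, I2, max_gen):
    """Sketch of Algorithm 1: alternate I1 GA iterations (Algorithm 2) and
    I2 PSO iterations (Algorithm 3) on one population until max_gen."""
    best = min(pop, key=fitness_fn)
    for _ in range(max_gen):
        for _ in range(I1):
            pop = ga_step(pop, fitness_fn)       # selection, crossover, mutation
        # (Re)initialize velocities: one entry per gene; finer dimension
        # handling is left to pso_step in this sketch.
        vel = [[0.0] * len(ind) for ind in pop]
        for _ in range(I2):
            pop, vel = pso_step(pop, vel, fitness_fn)
        cand = min(pop, key=fitness_fn)
        if fitness_fn(cand) < fitness_fn(best):
            best = cand                          # keep the best individual so far
    return best
```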

7.1 GA schema

In order to tackle complicated problems, GA employs a random search strategy that mimics biological evolution, with survival of the fittest as its evolution concept. Selection, crossover, and mutation are the three fundamental operators in a typical GA. Each individual is represented by a chromosome, a string (typically binary or decimal) that encodes the solution. The selection operator replicates chromosomes of higher quality to pass on to the next generation in order to improve fitness values in the population as a whole. Mutations and crossovers benefit evolutionary populations by giving them different, and presumably higher-quality, genetic material.

7.1.1 Chromosome structure and selection operation

The initial stage in solving the problem is chromosome encoding, which is highly problem-dependent. Figure 8 shows the chromosome encoding procedure. The population matrix is made up of a number of individuals, and each individual has a set of genes; {\(\alpha\), R} becomes a gene of an individual in the population. In each iteration, the individual with the lowest energy consumption is selected as the optimum individual. There are two widely used methods for the selection operation: roulette wheel and tournament. This study adopts tournament selection, since the roulette wheel technique is better suited to maximization problems. In tournament selection, K individuals are chosen from the population at random and the fittest becomes a parent; an identical process is performed to pick the second parent (see the sketch after Fig. 8). The crossover and mutation procedures are then used by the parents to generate offspring, enhancing variety and providing better solutions to this problem.

Fig. 8

Chromosome encoding method
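
A minimal sketch of this encoding and of tournament selection follows; the tournament size k = 3 and the upper bound B_total on each R_k draw are assumptions (constraint handling is left to the fitness function):

```python
import random

def random_individual(num_devices, B_total):
    """One chromosome: a gene (alpha_k, R_k) per IoRT device."""
    return [(random.uniform(0.0, 1.0), random.uniform(0.0, B_total))
            for _ in range(num_devices)]

def tournament_select(pop, fitness_fn, k=3):
    """Draw k individuals at random and keep the lowest-energy one."""
    return min(random.sample(pop, k), key=fitness_fn)
```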

7.1.2 Crossover and mutation operation

The crossover operator works like reproduction and biological crossover: more than one parent is selected, and one or more offspring are produced using the parents' genetic material. The goal is to produce a bigger offspring pool with higher fitness levels. The idea underlying reproduction is that the offspring may be even fitter by combining features of two already-fit individuals, inheriting the best from each parent. Therefore, crossover aids optimization and enhances convergence. Crossover is carried out by picking a random point on the chromosome and interchanging the parents' genes around it; the crossover then creates new offspring depending on the selected exchange point and the corresponding portions of the parents. Figure 9 shows an example of a crossover operation. The GA has a crossover probability, which decides whether crossover will occur or not. Practical and theoretical results point to a crossover probability \({P}_{c}\) in the 0.6–0.95 range.

Fig. 9

Crossover operation

In short, mutation may be described as a small random chromosomal modification made in order to find a new solution. It is utilized to preserve and add variety to the population and is generally employed with low probabilities \({P}_{m}\), around 0.0001–0.05. This operator generates new adaptive solutions by randomly modifying one or more genes to produce new offspring, thereby creating new adaptive individuals that avoid local optima. For the integer representation, a random value from the list of possible values is assigned to a randomly selected gene. Figure 10 shows the exact procedure of the operation. If the mutated gene violates a constraint bound, a new random value is assigned (see the sketch after Fig. 10).

Fig. 10

Mutation operation
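
Continuing the sketch above (the random module is already imported), both operators can be written as follows; the values of pc and pm are illustrative choices inside the ranges quoted in the text:

```python
def crossover(p1, p2, pc=0.8):
    """Single-point crossover, applied with probability pc (0.6-0.95)."""
    if random.random() > pc or len(p1) < 2:
        return p1[:], p2[:]
    cut = random.randint(1, len(p1) - 1)        # random exchange point
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(ind, B_total, pm=0.01):
    """Random-reset mutation with per-gene probability pm (0.0001-0.05);
    redrawing a gene keeps it inside its feasible bounds."""
    return [(random.uniform(0.0, 1.0), random.uniform(0.0, B_total))
            if random.random() < pm else gene
            for gene in ind]
```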

7.2 PSO schema

Particle Swarm Optimization (PSO) is one of the approaches of evolutionary computation. PSO is a population-based optimization approach inspired by the movements of bird flocks and fish schools. The approach searches for the best solution using agents, called particles, which have a stochastic and a deterministic component to adapt their paths. The population of moving particles is called a swarm. Particles have only two properties: position and velocity. Each particle adjusts its position in the search space and updates its velocity in response to its own and other particles' movement experiences in order to reach a better position. The particle's position in PSO corresponds to the solution in the GA population, and the velocity shows the rate of change of the particle's position. Each particle keeps a record of its prior best position, termed the particle's best position (pBest). Among all pBest values, the one with the best fitness is referred to as the group's best position (gBest). Hence, the “best” position of each particle, as well as the “best” position of the group, influence the particle. Suppose that the search space is S-dimensional; then the ith individual in the swarm may be expressed by an S-dimensional vector \({X}_{i}\) = \(\left({X}_{i,1}, {X}_{i,2}, \dots , {X}_{i,S}\right)\), and the particle's velocity by another S-dimensional vector \({V}_{i}\) = \(\left({V}_{i,1}, {V}_{i,2}, \dots , {V}_{i,S}\right)\). Each particle adjusts its position according to the new velocity with each iteration. During the iteration procedure, the velocity and position of a particle are updated as in (13) and (14).

$$V_{i,d}^{t + 1} = \omega V_{i,d}^{t} + r_{1} c_{1} \left( {pBest_{i,d}^{t} - X_{i,d}^{t} } \right) + r_{2} c_{2} \left( {gBest_{i,d}^{t} - X_{i,d}^{t} } \right)$$
(13)
$$X_{i,d}^{t + 1} = X_{i,d}^{t} + V_{i,d}^{t + 1}$$
(14)

Here, t is the iteration number, d = 1, 2, 3, …, S, and i = 1, 2, …, M, where M is the swarm size. ω, c1, c2, r1, and r2 are, respectively, the inertia weight, which weights the particle's prior velocity; two acceleration constants; and two random parameters within [0, 1] that increase the randomness of the search. A particle is guided to its optimal position by the acceleration parameters c1 and c2. If the values of c1 and c2 are large, the search is fast but the optimal answer may be overlooked; if they are low, the search is slow and a local optimum may be found. Thus, we set c1 = 2 and c2 = 2.
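
A dimension-wise sketch of Eqs. (13) and (14) for one particle follows (reusing the random module imported earlier); c1 = c2 = 2 follows the text, while the inertia weight value is an assumption:

```python
def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One velocity and position update per Eqs. (13)-(14)."""
    new_x, new_v = [], []
    for xd, vd, pd, gd in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vd = w * vd + r1 * c1 * (pd - xd) + r2 * c2 * (gd - xd)  # Eq. (13)
        new_v.append(vd)
        new_x.append(xd + vd)                                    # Eq. (14)
    return new_x, new_v
```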

8 Performance evaluation

The numerical experiments in this section will be based on the system model presented in Fig. 6 and the algorithm provided above. All experiments are executed on a PC with a Windows 10 operating system and 12 GB of RAM, and the algorithms are programmed in MATLAB 2021a.

We suppose that the sub-area is equipped with one fog node and eight IoRT devices. Specifically, each fog node has a computation capacity of 15 GHz, whereas the computation capacity of each IoRT device is designed to be in the range of 0.1–1 GHz. Furthermore, the number of CPU cycles required to process one bit of data is assumed to equal 1000 cycles/bit. Assuming that the spectrum is partitioned into k channels with identical bandwidth, we set the channel bandwidth \({b}_{k}\) = 5 MHz and the white Gaussian noise to \({N}_{0}\) = \({10}^{-7}\) W in the communication environment. Each end user has a unique task that has to be computed, and the data amount of the computing task is chosen at random from 1 to 20 Mb for each end user. In Table 4, we summarize the simulation parameters; for convenience, they are also collected in the configuration sketch below.

Table 4 Simulation parameters
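
For reference, the setup above can be collected into one configuration; the key names are ours, and the device capacity and task size entries denote the ranges from which values are drawn:

```python
sim_params = {
    "num_fog_nodes": 1,
    "num_iort_devices": 8,
    "fog_capacity_hz": 15e9,               # 15 GHz per fog node
    "device_capacity_hz": (0.1e9, 1.0e9),  # 0.1-1 GHz per IoRT device
    "cycles_per_bit": 1000,                # L_k
    "channel_bandwidth_hz": 5e6,           # b_k = 5 MHz per channel
    "noise_power_w": 1e-7,                 # N_0
    "task_size_mbit": (1, 20),             # drawn at random per end user
}
```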

Figure 11 shows the simulation results for the total energy consumption versus the number of iterations for eight end devices. As a first step, we ran our system through a varying number of iterations to evaluate how the number of iterations influences the energy consumption of the IoRT devices. In the beginning, the cost of the end devices decreased rapidly; in the following rounds, as seen in Fig. 11, the rate of decline lessened as the number of iterations increased. From Fig. 11, it can be observed that the suggested algorithm's overall energy consumption tends to converge after 90 iterations. For instance, increasing the number of iterations from one to ninety resulted in a drop in energy consumption of 18 percent, from 21,684 to 17,810 mJ. Beyond ninety generations, the rate of decline was reduced, as seen in Fig. 11.

Fig. 11

Total energy consumption with iteration times

The energy consumption of the IoRT devices was estimated and compared for three different situations in order to evaluate the proposed approach's performance versus task sizes ranging from 1 to 15 MB. Figure 12 depicts the effect of increasing the task size on the overall amount of energy consumed during task computation in the three scenarios. In the first situation, all tasks were offloaded remotely; in the second, all tasks were performed locally; in the third, IHOGCP was implemented. Figure 12 shows that the energy consumption rose as the size of the task increased, and that the proposed algorithm has the lowest energy consumption of the three situations tested. Local computing, on the other hand, has the largest energy consumption. In contrast to local computing, the full-task offloading technique spends more energy on task transmission, but it still costs less overall than local computing. For instance, the suggested approach consumed 22,035 mJ of energy when the task was 12 MB in size; compared with full-task offloading and local computing, the amount of energy consumed was reduced by 28.3 percent and 52.2 percent, respectively.

Fig. 12

The impact of task size on energy consumption

Then, as seen in Fig. 13, the performance of IHOGCP was analyzed and compared to the Genetic Algorithm (GA) [64], Particle Swarm Optimization (PSO) [65], Whale Optimization Algorithm (WOA) [66], Artificial Bee Colony (ABC) [67], Ant Lion Optimizer (ALO) [68], Grey Wolf Optimizer (GWO) [69], and Salp Swarm Algorithm (SSA) [70]. From the perspective of convergence, PSO is the best option, followed by IHOGCP, then GA, and finally WOA; the other four algorithms take longer to converge, requiring more than 500 iterations to reach their convergence points. In terms of energy consumption, however, IHOGCP is the most efficient, whereas PSO is the least efficient, and GA's performance falls between those of IHOGCP and PSO. This is because IHOGCP combines the benefits of GA and PSO, with GA being better at searching the global domain and PSO converging faster; compared to GA or PSO alone, IHOGCP therefore performs better. For example, increasing the number of iterations from 1 to 90 in the proposed approach resulted in an 18% drop in energy consumption, whereas GA needs 322 iterations to converge to approximately the same result. PSO converges after 20 generations but has the highest energy consumption; the energy consumed by the proposed approach is 8 percent lower than that of PSO. Finally, because the Whale Optimization Algorithm suffers from slow convergence speed, low precision, and a propensity to slip into local optima, it converges after 386 iterations and consumes 6 percent more energy than IHOGCP.

Fig. 13

Convergence comparison between IHOGCP, GA, PSO and WOA

The best results obtained by the various algorithms on eight IoRT devices, in terms of task offloading ratio and transmission rate, are shown in Table 5. Each device has its own offloading ratio and transmission rate, which differ from those of the other devices depending on the size of the task being performed. It can easily be observed that the IHOGCP approach attains the best results compared with the other approaches under different task sizes in the MATLAB experimental environment.

Table 5 Offloading ratio and transmission rate comparison for eight IoRT devices using four different approaches

The influence of the IoRT devices' local computation capacities on energy consumption and task offloading ratios in the different schemas is illustrated in Fig. 14. For Fig. 14a, the task sizes were set identical across all IoRT devices and equal to 13 MB. It is obvious that as local computation capacity increased, total energy consumption decreased, and the offloading ratio decreased as well, because IoRT devices can process a larger ratio of tasks locally while still meeting the deadline constraints.

Fig. 14

Energy consumption versus local computation capacity of IoRT devices

Clearly, the energy consumption of all schemas was close to one another and substantial when the local computation capacity was at its lowest, 0.1 GHz. As a result, the task-offloading ratio was as high as possible in order to complete the majority of tasks at the fog node as quickly as possible. However, as local computation capacity increased, the proposed approach was more efficient than the alternative approaches and achieved a lower offloading ratio, owing to the fact that IoRT devices with high computation capacity can complete tasks in a short period of time.

On the other hand, in Fig. 14b, task sizes were randomly selected for each IoRT device. It is obvious that our proposed approach consumes less energy than the others at different local computation capacities. However, when local computing capacity is small, the difference in energy consumption between algorithms is minor, and the difference grows as local computing capacity increases. For instance, at 1 GHz computation capacity, the PSO algorithm consumes the most energy (19,255 mJ), followed by the WOA algorithm (18,916 mJ). In contrast, our suggested approach, the GA algorithm, and the ABC algorithm were found to use the least energy (17,804 mJ). Additionally, three other algorithms were tested: GWO used 18,454 mJ, SSA used 18,410 mJ, and ALO used 18,591 mJ. Overall, these additional algorithms performed better than PSO and WOA but worse than our proposed algorithm. Hence, the suggested algorithm performs well under high computation capacity in terms of both convergence speed and energy consumption.

Figure 15 shows the energy consumption of the various schemas as a function of the number of IoRT devices, ranging from two to thirty-two. As illustrated in Fig. 15, IHOGCP has a low computational cost in comparison to the other algorithms. When the number of IoRT devices is small, the difference in energy consumption between the compared algorithms is nearly non-existent. Nevertheless, as the number of IoRT devices increases, the difference in energy consumption between the compared algorithms becomes obvious, and the proposed approach becomes more efficient than the others. This is because, for PSO and WOA, the more devices present, the greater the likelihood of becoming trapped in a local optimum.

Fig. 15

Comparison of the energy consumption in different schemas under different numbers of IoRT devices

The elapsed running time of our proposed algorithm and the other three algorithms in three different scenarios is illustrated in Fig. 16. The three scenarios tested were: a first scenario in which the sub-areas were equipped with four IoRT devices, a second with eight IoRT devices, and a third with sixteen IoRT devices. Our proposed algorithm was evaluated against these three algorithms in terms of elapsed execution time, and the results showed that it consumed less elapsed time than the other three algorithms in all three scenarios, indicating that our proposed algorithm is more time-efficient during execution.

Fig. 16

Elapsed time in the run for four algorithms at different scenarios

9 Conclusion

In this research, a computation offloading strategy was presented to address the issue of IoRT devices consuming excessive amounts of energy for task computation while still fulfilling deadlines. The task offloading ratio and transmission rate were optimized using an Improved Hybrid Optimization technique based on GA Combined with PSO, named IHOGCP, to reduce the total energy consumption. Based on the results of the simulations, the proposed technique can reduce IoRT devices' energy consumption under time limitations. Furthermore, simulation was used to evaluate the convergence of the proposed approach, and the results show that the proposed algorithm outperforms the GA and PSO methods applied separately.

In future work, we will keep improving the computation offloading model by applying it to more realistic settings. Additional approaches for offloading optimization will be investigated as well. Furthermore, we will examine the difficulty of offloading tasks in a dynamic moving environment for user devices with inter-task dependencies.