Mutated Deep Reinforcement Learning Scheduling in Cloud for Resource-Intensive IoT Systems

Published in Wireless Personal Communications

Abstract

Cloud computing has indisputably emerged as the primary computing and storage platform for a wide range of contemporary workloads. These workloads, spanning IoT to big data analytics and processing, generate a vast number of daily tasks that must be mapped efficiently onto cloud resources. Consequently, a suitable task scheduling mechanism is needed, one that minimizes execution delay and maps tasks onto cloud resources effectively so that they can execute efficiently. To tackle this challenge, the study presents hybrid optimized Deep Reinforcement Learning-based scheduling approaches for managing large workloads on cloud resources. These methods aim to minimize task waiting time and resource consumption, leading to improved performance in cloud computing environments. The proposed hybrid metaheuristic method, which combines Particle Swarm Optimization, the Firefly Algorithm, and Tabu Search with a mutation operator, is designed to improve task scheduling in IoT systems with resource-intensive requirements. By leveraging the complementary strengths of these algorithms, the approach seeks to optimize resource utilization and enhance the efficiency of task scheduling in complex IoT environments. Simulation results demonstrate the effectiveness of the proposed hybrid methods compared with other approaches.
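The abstract describes the hybrid scheme only at a high level. As a rough, hypothetical sketch of how a PSO/Firefly/Tabu hybrid with mutation could map a batch of tasks onto cloud virtual machines while minimizing makespan, the Python code below uses assumed task lengths, VM speeds, fitness definition, and parameter values that are not taken from the paper.

```python
# Hypothetical sketch of a PSO + Firefly + Tabu Search hybrid with mutation
# for mapping tasks onto cloud VMs.  All names, parameters, and the makespan
# fitness are illustrative assumptions, not the paper's exact formulation.
import random

NUM_TASKS, NUM_VMS = 40, 8
task_len = [random.randint(100, 1000) for _ in range(NUM_TASKS)]   # task size (MI), assumed
vm_speed = [random.randint(500, 2000) for _ in range(NUM_VMS)]     # VM speed (MIPS), assumed

def fitness(mapping):
    """Makespan of a task->VM mapping (lower is better)."""
    load = [0.0] * NUM_VMS
    for t, vm in enumerate(mapping):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def mutate(mapping, rate=0.1):
    """Randomly reassign a fraction of tasks (mutation step)."""
    child = mapping[:]
    for t in range(NUM_TASKS):
        if random.random() < rate:
            child[t] = random.randrange(NUM_VMS)
    return child

def move_towards(src, dst, strength):
    """Firefly/PSO-style move: copy some assignments from a better mapping."""
    child = src[:]
    for t in range(NUM_TASKS):
        if random.random() < strength:
            child[t] = dst[t]
    return child

def hybrid_schedule(pop_size=30, iters=200, tabu_len=50):
    pop = [[random.randrange(NUM_VMS) for _ in range(NUM_TASKS)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    tabu = set()                                     # recently visited mappings
    for _ in range(iters):
        new_pop = []
        for sol in pop:
            # Firefly step: move towards a randomly chosen brighter (better) solution.
            brighter = [s for s in pop if fitness(s) < fitness(sol)]
            cand = move_towards(sol, random.choice(brighter), 0.3) if brighter else sol[:]
            # PSO-like step: also pull towards the global best found so far.
            cand = move_towards(cand, best, 0.2)
            # Mutation keeps diversity; the Tabu list discourages revisiting mappings.
            cand = mutate(cand)
            if tuple(cand) in tabu:
                cand = mutate(cand, rate=0.3)        # perturb harder to escape a tabu region
            tabu.add(tuple(cand))
            if len(tabu) > tabu_len:
                tabu.pop()
            new_pop.append(cand if fitness(cand) < fitness(sol) else sol)
        pop = new_pop
        best = min(pop + [best], key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    mapping, makespan = hybrid_schedule()
    print("best makespan (s):", round(makespan, 2))
```

In this sketch the Firefly step pulls each candidate towards a better mapping, the PSO-like step pulls it towards the global best, and the mutation plus Tabu list preserve diversity and discourage cycling; the paper's Deep Reinforcement Learning component and actual parameter settings are not reproduced here.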


Data Availability

Not applicable.

Code Availability

Not applicable.


Funding

Not applicable.

Author information

Corresponding author

Correspondence to Harshala Shingne.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shingne, H., Shriram, R. Mutated Deep Reinforcement Learning Scheduling in Cloud for Resource-Intensive IoT Systems. Wireless Pers Commun 132, 2143–2155 (2023). https://doi.org/10.1007/s11277-023-10709-5
