
Deep reinforcement learning-based joint optimization model for vehicular task offloading and resource allocation

  • Published in: Peer-to-Peer Networking and Applications

Abstract

With the rapid advancement of the Internet of Vehicles and autonomous driving technology, vehicle operations demand ever more computing power. However, the strict latency requirements of vehicular tasks strain the communication and computing resources of the vehicular edge computing network. This paper introduces a two-stage joint optimization model to address these challenges, focusing on minimizing vehicle task latency and optimizing resource allocation. In addition, the task completion rate is treated as a key indicator of safety and reliability in practical application scenarios. We then propose a global adaptive offloading and resource allocation optimization model named GOAL. The GOAL model dynamically adjusts the weight coefficients of the reward function and integrates the actor-critic algorithm to adapt effectively to uncertain environments. Through experimental comparisons across task arrival rates and reward-function weight coefficients, we determine the optimal hyperparameters for the proposed model. Simulation results show that the GOAL model outperforms the benchmark methods by over 30% in reward value and also achieves lower task delay and energy consumption. In addition, the GOAL model attains a higher task completion rate than the benchmark methods, and it exhibits strong search capability and faster convergence.
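The abstract describes a reward function whose weight coefficients (for delay, energy, and task completion) are adjusted dynamically during training. The paper's exact reward form is not reproduced here; the following is a minimal illustrative sketch of that idea, in which every function name and coefficient is a hypothetical placeholder rather than the authors' actual formulation:

```python
# Illustrative sketch only: a weighted multi-objective reward with an
# adaptive weight update, in the spirit of the GOAL model's description.
# All names and values below are assumptions, not the paper's definitions.

def goal_reward(delay, energy, completed,
                w_delay=0.5, w_energy=0.3, w_done=0.2):
    """Penalize task delay and energy consumption; reward completion.

    `completed` is the fraction of tasks finished within their deadline
    (0.0 to 1.0). Weights are the reward-function coefficients that the
    GOAL model tunes during training.
    """
    return -w_delay * delay - w_energy * energy + w_done * completed


def adapt_weights(weights, pressure, lr=0.01):
    """Toy adaptive update: nudge each weight in proportion to how badly
    its objective is currently doing (`pressure`), then renormalize so
    the coefficients still sum to 1."""
    updated = [max(1e-3, w + lr * p) for w, p in zip(weights, pressure)]
    total = sum(updated)
    return [w / total for w in updated]
```

In an actor-critic loop, `goal_reward` would be evaluated after each offloading decision, and `adapt_weights` would be invoked periodically to rebalance the objectives, so the agent is not permanently locked to one trade-off between latency, energy, and completion rate.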



Data availability

No datasets were generated or analysed during the current study.


Funding

This research was funded by Natural Science Foundation of Jiangsu Province grant number BK20201415.

Author information


Contributions

Zhi-Yuan Li wrote the main manuscript text, and Zeng-Xiang Zhang wrote the code for the simulation experiments. All authors reviewed the manuscript.

Corresponding author

Correspondence to Zhi-Yuan Li.

Ethics declarations

Ethics approval

Not applicable.

Consent for publication

The authors consent to publication in Peer-to-Peer Networking and Applications.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection: 1- Track on Networking and Applications

Guest Editor: Vojislav B. Misic

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Li, ZY., Zhang, ZX. Deep reinforcement learning-based joint optimization model for vehicular task offloading and resource allocation. Peer-to-Peer Netw. Appl. (2024). https://doi.org/10.1007/s12083-024-01693-z

