Abstract
Because the traditional computing model can no longer meet the particular demands of Internet of Vehicles (IoV) tasks, namely high bandwidth, low latency, and high reliability, this paper proposes a resource allocation strategy for the IoV that applies reinforcement learning in an edge cloud computing environment. First, a multi-layer resource allocation model for the IoV is proposed, in which edge cloud computing servers and roadside units cooperate to dynamically coordinate edge computing and content caching. Then, on the basis of the communication, computation, and caching models, idle resources in the IoV are fully exploited to minimize network delay under a limited energy budget. Finally, the optimization objective is solved with a double deep Q network (DDQN) model to obtain the best resource allocation plan. Simulation results based on an IoV model show that the computational energy consumption and system delay of the proposed strategy do not exceed 400 J and 600 ms, respectively. Moreover, the overall resource allocation performance is better than that of the comparison strategies, indicating practical application prospects.
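The core of the solver described in the abstract is a double deep Q network, where one network selects the next action and a second, slowly updated network evaluates it. The following is a minimal sketch of that target computation only; the state and action dimensions, the linear Q-approximators, and the toy transition are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATE, N_ACTION = 4, 3   # illustrative sizes, not taken from the paper
GAMMA = 0.9                # discount factor

# Two linear Q-approximators, Q(s, .) = W @ s, standing in for the
# online and target deep Q networks of the DDQN.
W_online = rng.normal(size=(N_ACTION, N_STATE))
W_target = W_online.copy()

def q_values(W, state):
    """Q-values of every action in the given state."""
    return W @ state

def double_dqn_target(reward, next_state, done):
    """Double-DQN target: the online net picks the action, the target
    net evaluates it, reducing the overestimation bias of plain DQN."""
    if done:
        return reward
    a_star = int(np.argmax(q_values(W_online, next_state)))          # selection
    return reward + GAMMA * q_values(W_target, next_state)[a_star]   # evaluation

# One TD step on a toy transition: state = idle-resource features,
# action = an offloading/caching choice, reward = negative observed delay.
s  = rng.normal(size=N_STATE)
a  = 1
r  = -0.25
s2 = rng.normal(size=N_STATE)

y = double_dqn_target(r, s2, done=False)
td_error = y - q_values(W_online, s)[a]
W_online[a] += 0.01 * td_error * s   # gradient step for a linear Q
```

In the paper's setting the reward would encode the delay/energy trade-off of an allocation decision, and the target network's weights would be periodically synchronized with the online network.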
Data availability
The data used in this article are not subject to any restrictions.
Funding
No funding was received for this work.
Author information
Authors and Affiliations
Contributions
The majority of the work, including methodology, software, conceptualization, validation, investigation, data curation, writing of the original draft, review and editing, visualization, and project administration, was completed by YL. QT and ZL contributed to methodology, review and editing, conceptualization, investigation, and validation.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
About this article
Cite this article
Li, Y., Liu, Z. & Tao, Q. A resource allocation strategy for internet of vehicles using reinforcement learning in edge computing environment. Soft Comput 27, 3999–4009 (2023). https://doi.org/10.1007/s00500-022-07544-4
Keywords
- Edge cloud computing
- Reinforcement learning
- Internet of vehicles
- Resource allocation strategy
- Double deep Q network model
- Network delay
- Computing energy consumption