Learn with Curiosity: A Hybrid Reinforcement Learning Approach for Resource Allocation for 6G enabled Connected Cars

Published in Mobile Networks and Applications

Abstract

Due to the rapid expansion of heterogeneous mobile networks, the demand for network, processing, and caching resources has increased significantly. A dynamic vehicular network must be able to manage multiple resources effectively and efficiently, which is in accord with the majority of previous research on resource management. It is therefore imperative to focus on dynamic networks that change rapidly over time and require a large number of connections. Using curiosity-enabled learning, this study investigates how distributed resource allocation mechanisms for mobile vehicle-to-infrastructure (V2I) communications can be applied to medium- and dense-traffic scenarios. With a distributed resource allocation mechanism, an autonomous agent such as a vehicle can determine the optimal sub-band and power level for transmission without relying on global information. An agent that learns only from extrinsic rewards tends to exhibit high repetition and heavy fluctuation in its decisions. We model curiosity according to exit locations in the mobility model; supplying a variety of intrinsic curiosity rewards in addition to the extrinsic ones improves the performance of the system. We find that, when learning from intrinsic rewards, each agent can satisfy the latency constraints of its V2I links with minimal prediction error.
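The abstract describes agents that augment an extrinsic reward with an intrinsic curiosity bonus derived from prediction error. The paper's actual model is not reproduced here; the sketch below is a minimal, hypothetical illustration of that general idea, assuming a linear forward model, a 5-dimensional state vector, and a 4-sub-band by 3-power-level action grid. All names, dimensions, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

N_SUBBANDS, N_POWER = 4, 3           # assumed action grid: sub-band x power level
N_ACTIONS = N_SUBBANDS * N_POWER
STATE_DIM = 5                        # assumed state features (e.g. CSI, latency budget)

# Linear forward model: predicts the next state from (state, one-hot action).
W = np.zeros((STATE_DIM, STATE_DIM + N_ACTIONS))

def one_hot(a: int) -> np.ndarray:
    v = np.zeros(N_ACTIONS)
    v[a] = 1.0
    return v

def intrinsic_reward(s, a, s_next, lr=0.05, beta=0.5):
    """Curiosity bonus = forward-model prediction error (model updated online)."""
    global W
    x = np.concatenate([s, one_hot(a)])
    err = s_next - W @ x             # prediction error of the forward model
    W += lr * np.outer(err, x)       # one SGD step on the squared error
    return beta * float(err @ err)   # novel transitions yield a large bonus

def total_reward(r_extrinsic, s, a, s_next):
    """Hybrid reward: extrinsic term (e.g. latency/rate based) plus curiosity bonus."""
    return r_extrinsic + intrinsic_reward(s, a, s_next)
```

Because the forward model improves on transitions it sees repeatedly, the bonus decays for familiar state-action pairs, nudging the agent toward unexplored sub-band and power-level combinations instead of repeating or oscillating between the same decisions.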



Acknowledgements

The author would like to thank the Editor and reviewers for their insightful comments, which helped significantly to improve the quality of this work. The author would also like to thank Smt. Chandaben Mohanbhai Patel Institute of Computer Applications, Charotar University of Science and Technology (CHARUSAT) for providing the resources to carry out this cutting-edge research.

Author information

Corresponding author

Correspondence to Sagar Kavaiya.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kavaiya, S. Learn with Curiosity: A Hybrid Reinforcement Learning Approach for Resource Allocation for 6G enabled Connected Cars. Mobile Netw Appl 28, 1176–1186 (2023). https://doi.org/10.1007/s11036-023-02126-6

