Towards Efficient Edge Computing Through Adoption of Reinforcement Learning Strategies: A Review

  • Conference paper
  • First Online:
Data Management, Analytics and Innovation (ICDMAI 2022)

Part of the book series: Lecture Notes on Data Engineering and Communications Technologies ((LNDECT,volume 137))


Abstract

This paper maps variants of Reinforcement Learning (RL) techniques onto the key challenges of Edge Computing (EC), broadly addressing task handling and Quality of Service (QoS) parameters. EC has flourished since the advent of Industry 4.0: computationally reliable, heterogeneous, secure mobile edge devices, powered by arrays of multifarious sensors and organized in multi-edge hierarchical architectures, have found a strong footing on the backbone of capable communication protocols, with 5G technology further fueling their growth. However, with millions of such edge devices entering a plethora of EC applications, each with its own set of domain-specific challenges, devising a suitable agent that senses the environment and learns from it has made RL one of the significant tools for making the EC framework intelligent. Here we lay out how RL has achieved noteworthy success in solving some of the pressing EC challenges, given that EC finds use in settings such as autonomous driving, content delivery, smart grids, and healthcare applications.
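To make the abstract's core idea concrete, the following is a minimal, purely illustrative sketch (not taken from the paper) of an RL agent learning a task-offloading policy: tabular Q-learning over a toy state space of edge-server load levels, with the reward defined as negative latency. All state definitions, latency numbers, and hyperparameters here are hypothetical, chosen only to show the sense-learn-act loop the abstract describes.

```python
# Illustrative sketch: tabular Q-learning for a toy edge-offloading decision.
# States are coarse edge-server load levels; actions are 0 = compute locally,
# 1 = offload to the edge. Rewards are negative latency (lower latency is better).
import random

N_STATES = 3          # edge load: 0 = low, 1 = medium, 2 = high
ACTIONS = (0, 1)      # 0: local execution, 1: offload to edge
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def latency(state, action):
    """Toy latency model: offloading is fast unless the edge is loaded."""
    return 5.0 if action == 0 else 1.0 + 3.0 * state

def step(state, action):
    """Reward is negative latency; edge load then drifts randomly."""
    reward = -latency(state, action)
    next_state = random.randint(0, N_STATES - 1)
    return reward, next_state

random.seed(0)
state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[state][a])
    reward, next_state = step(state, action)
    # standard Q-learning update
    Q[state][action] += ALPHA * (
        reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

# The learned greedy policy: offload while the edge is lightly loaded,
# fall back to local execution once the edge is heavily loaded.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

The surveyed works replace this toy table with deep, multi-agent, meta-, or federated variants of the same loop, but the structure (state from the edge environment, action as an offloading or allocation decision, reward from QoS metrics) is the common thread.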



Author information

Correspondence to Aritra Ray.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Ray, A., Chakrabarti, A. (2023). Towards Efficient Edge Computing Through Adoption of Reinforcement Learning Strategies: A Review. In: Goswami, S., Barara, I.S., Goje, A., Mohan, C., Bruckstein, A.M. (eds) Data Management, Analytics and Innovation. ICDMAI 2022. Lecture Notes on Data Engineering and Communications Technologies, vol 137. Springer, Singapore. https://doi.org/10.1007/978-981-19-2600-6_17
