Deep Reinforcement Learning for Autonomous Mobile Networks in Micro-grids

Part of the Studies in Computational Intelligence book series (SCI, volume 984)

Abstract

In this chapter, we describe the design of control schemes for energy self-sustainable mobile networks based on Deep Learning. The goal is to enable intelligent energy management that allows base stations to operate mostly off-grid using renewable energy. To this end, we formulate an online grid-energy and network-throughput optimization problem and consider both centralized and distributed Deep Reinforcement Learning implementations. We provide a thorough discussion of the reference scenario, the adopted techniques, the achieved performance, and the complexity and feasibility of the proposed models, together with the energy and cost savings attained. Results demonstrate that Deep Q-Learning-based algorithms are a viable and economically convenient solution for enabling the energy self-sustainability of mobile networks grouped in micro-grids.

Keywords

  • Energy sustainability
  • Energy efficiency
  • Mobile networks
  • Edge computing
  • Energy harvesting
  • Deep reinforcement learning
  • Deep learning


Abbreviations

ANN:

Artificial neural network

BB:

Baseband

BS:

Base station

DRL:

Deep reinforcement learning

DDRL:

Distributed deep reinforcement learning

DP:

Dynamic programming

DQL:

Deep Q-learning

EH:

Energy harvesting

FQL:

Fuzzy Q-learning

MBS:

Macro base station

MDP:

Markov decision process

MEC:

Multi-access edge computing

MRL:

Multi-agent reinforcement learning

NFV:

Network function virtualization

RAN:

Radio access network

RL:

Reinforcement learning

SBS:

Small base station

SDN:

Software defined networking

SGD:

Stochastic gradient descent

vBS:

Virtual small base station

\(\boldsymbol{A}^t\) :

Operative states (control actions) of the SBSs in slot t

\(\boldsymbol{B}^t\) :

Energy stored in the batteries at the beginning of slot t

\(\boldsymbol{H}^t\) :

Energy harvested by SBSs in slot t

\({h}^t\) :

Hour of the day in slot t

\({m}^t\) :

Month in slot t

\(\boldsymbol{L}^t\) :

Traffic load generated inside the coverage of the vSBs in slot t

\(r_t\) :

Scalar reward signal

\(\boldsymbol{X}^t\) :

State of the vSBs in slot t

\(\alpha \) :

Learning rate

\(\varepsilon \) :

Exploration parameter

\(\gamma \) :

Discount factor
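The notation above defines an MDP: in each slot t, a controller observes the state (battery levels, harvested energy, traffic load, time of day) and selects the operative states of the SBSs to maximize a scalar reward. As a minimal illustration of the Q-learning machinery with learning rate \(\alpha\), exploration parameter \(\varepsilon\), and discount factor \(\gamma\), the sketch below trains a tabular agent on a toy environment; its dynamics and reward (daytime-only harvesting, a penalty for grid energy) are simplified assumptions for illustration, not the chapter's simulator or its Deep Q-Learning models.

```python
import random

# Toy MDP mirroring the chapter's notation (all dynamics are illustrative assumptions):
# state X^t = (B^t, h^t, L^t): discretized battery level, hour of day, traffic load;
# action A^t: operative state of the SBS (0 = sleep, 1 = active);
# reward r_t: traffic served minus a penalty for energy drawn from the grid.

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1    # learning rate, discount factor, exploration
B_LEVELS, HOURS, L_LEVELS, ACTIONS = 5, 24, 3, 2

Q = {}  # tabular Q-function: Q[(B, h, L)] -> list of action values

def q_values(state):
    return Q.setdefault(state, [0.0] * ACTIONS)

def step(state, action, rng):
    """Toy transition: harvesting H^t recharges the battery by day, load drains it."""
    B, h, L = state
    H = 1 if 8 <= h < 18 else 0                   # harvested energy (daytime only)
    drain = L if action == 1 else 0               # active SBS consumes per unit load
    B_next = max(0, min(B_LEVELS - 1, B + H - drain))
    grid = max(0, drain - B - H)                  # shortfall bought from the grid
    r = (L if action == 1 else 0) - 2 * grid      # served traffic minus grid penalty
    next_state = (B_next, (h + 1) % HOURS, rng.randrange(L_LEVELS))
    return next_state, r

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    state = (B_LEVELS - 1, 0, 1)
    for _ in range(episodes * HOURS):
        # epsilon-greedy action selection
        if rng.random() < EPS:
            a = rng.randrange(ACTIONS)
        else:
            a = max(range(ACTIONS), key=lambda x: q_values(state)[x])
        next_state, r = step(state, a, rng)
        # standard Q-learning update with learning rate alpha, discount gamma
        q_values(state)[a] += ALPHA * (
            r + GAMMA * max(q_values(next_state)) - q_values(state)[a]
        )
        state = next_state
    return Q
```

The chapter's Deep Q-Learning approach replaces this table with a neural approximator over the same state and action definitions; the update rule and the \(\varepsilon\)-greedy exploration are conceptually identical.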


Acknowledgements

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 675891 (SCAVENGE) and by Spanish MINECO grant TEC2017-88373-R (5G-REFINE).

Author information

Corresponding author

Correspondence to Marco Miozzo.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Miozzo, M., Piovesan, N., Temesgene, D.A., Dini, P. (2021). Deep Reinforcement Learning for Autonomous Mobile Networks in Micro-grids. In: Koubaa, A., Azar, A.T. (eds) Deep Learning for Unmanned Systems. Studies in Computational Intelligence, vol 984. Springer, Cham. https://doi.org/10.1007/978-3-030-77939-9_8
