
Distributed Power Controller of Massive Wireless Body Area Networks based on Deep Reinforcement Learning

Abstract

Wireless body area networks (WBANs) face a tough challenge in energy efficiency due to several practical factors: the growing scale of network deployments, the emerging demands of healthcare applications, and the limited manufacturing techniques of sensors. In this work, we address the energy-saving problem in WBANs. We consider a layered network framework and hybrid channels with multiple in vivo media. A distributed power controller based on the deep Q-learning algorithm is developed to mitigate the effect of inter-network interference. The proposed controller employs distributed coordinators that learn from the WBAN environment and optimize the transmit power of sensors during communication. Simulation results demonstrate that our power controller achieves higher energy efficiency than two baseline power controllers, and that a properly configured controller at the coordinators yields significant performance gains as the network scale increases.
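The controller described above is a deep Q-network trained at each coordinator; as a minimal illustrative sketch (not the authors' implementation), the same idea can be shown with a tabular Q-learning loop in which states are quantized interference levels, actions are discrete transmit-power levels, and the reward is a toy energy-efficiency measure with an assumed QoS rate floor. All power levels, state mappings, and reward constants below are hypothetical:

```python
import math
import random

# Hypothetical single-coordinator power controller: tabular Q-learning over
# discrete transmit-power actions. The paper uses a deep Q-network; this toy
# replaces the network with a Q-table to keep the sketch self-contained.

POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]   # candidate transmit powers (assumed units)
N_STATES = 4                          # quantized inter-network interference bins
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1     # learning rate, discount, exploration rate

def reward(power, interference):
    """Toy energy efficiency: achievable rate per unit power, with a QoS floor."""
    sinr = power / (interference + 0.05)
    rate = math.log2(1.0 + sinr)
    if rate < 1.0:                    # assumed minimum-rate requirement
        return -1.0                   # penalize powers too weak for the link
    return rate / power               # bits per unit energy (proxy)

q = [[0.0] * len(POWER_LEVELS) for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy action selection over power levels."""
    if random.random() < EPS:
        return random.randrange(len(POWER_LEVELS))
    row = q[state]
    return row.index(max(row))

random.seed(0)
state = 0
for _ in range(5000):
    a = choose_action(state)
    interference = state * 0.5                # toy state-to-interference mapping
    r = reward(POWER_LEVELS[a], interference)
    next_state = random.randrange(N_STATES)   # stub environment transition
    # Standard Q-learning update toward the bootstrapped target.
    q[state][a] += ALPHA * (r + GAMMA * max(q[next_state]) - q[state][a])
    state = next_state

# Learned policy: one power level per interference state.
best = [POWER_LEVELS[row.index(max(row))] for row in q]
print(best)
```

Under this toy reward, the learned policy picks the lowest power that still satisfies the rate floor in low-interference states and escalates to higher power as interference grows, which is the qualitative behavior the paper's interference-mitigating controller targets.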




Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61901070, 61871062, 61771082, 61801065), partially supported by the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJQN201900611, KJQN201900604, KJQN201900609), and partially supported by Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant No. CXTDX201601020).

Author information


Corresponding author

Correspondence to Peng He.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

He, P., Liu, M., Lan, C. et al. Distributed Power Controller of Massive Wireless Body Area Networks based on Deep Reinforcement Learning. Mobile Netw Appl 26, 1347–1358 (2021). https://doi.org/10.1007/s11036-021-01751-3


Keywords

  • Wireless body area networks
  • Power control
  • Energy efficiency
  • Deep Q-network