Throughput and Lifetime Enhancement of WSNs Using Transmission Power Control and Q-learning

Published in: Wireless Personal Communications

Abstract

In this paper, a Q-learning algorithm is proposed to improve routing performance in Wireless Sensor Networks (WSNs). A transmission power control (TPC) method is combined with Q-learning to improve performance further. In the proposed method, each sensor node is treated as an agent that uses Q-learning to make routing decisions in a distributed manner and employs TPC when transmitting data packets. Agents with higher residual energy and a smaller hop distance to the sink are given priority in forwarding packets to the next hop. A convex energy function is used to calculate the effective distance, which in turn determines the power level used to send a packet. The time and space complexity of the proposed QL-TPC protocol are also computed and presented. The protocol has been simulated in NS3, with results averaged over ten simulation runs. The simulation results show improved network performance in terms of throughput, end-to-end delay, and network lifetime across different network sizes, packet sizes, and propagation models. The proposed model is compared with the existing protocols QLRP, Q-Routing, RBLR, and AODV, and it is observed that QL-TPC outperforms all of them. Further, the scalability of the protocol is investigated, and the proposed protocol is observed to be scalable.
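The per-node agent described in the abstract (Q-learning over candidate next hops, with a reward favouring high residual energy and few hops to the sink, plus a distance-to-power-level mapping for TPC) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name `QLTPCAgent`, the reward shape, the learning parameters, and the `power_level` thresholds are all assumptions made here for illustration.

```python
import random

class QLTPCAgent:
    """Hypothetical per-node agent: learns Q-values over neighbouring
    next hops, as in distributed Q-learning routing schemes."""

    def __init__(self, neighbors, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {n: 0.0 for n in neighbors}   # one Q-value per next hop
        self.alpha = alpha                     # learning rate (assumed)
        self.gamma = gamma                     # discount factor (assumed)
        self.epsilon = epsilon                 # exploration rate (assumed)

    def reward(self, neighbor_energy, neighbor_hops):
        # Illustrative reward: prioritise neighbours with higher residual
        # energy and a smaller hop distance to the sink.
        return neighbor_energy / (1.0 + neighbor_hops)

    def choose_next_hop(self):
        # Epsilon-greedy selection over candidate forwarders.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, next_hop, reward, best_q_downstream):
        # Standard Q-learning update applied after each forwarding decision.
        old = self.q[next_hop]
        self.q[next_hop] = old + self.alpha * (
            reward + self.gamma * best_q_downstream - old
        )

def power_level(distance, levels=((10.0, 0), (30.0, 1), (60.0, 2))):
    """Map an effective distance to the smallest sufficient TX power level.
    The distance thresholds here are placeholder values."""
    for max_d, lvl in levels:
        if distance <= max_d:
            return lvl
    return levels[-1][1]  # fall back to the maximum level
```

For example, an agent with two neighbours that receives a positive reward for forwarding via `n1` will greedily prefer `n1` thereafter, and a 25 m effective distance falls into the middle power level under the placeholder thresholds:

```python
agent = QLTPCAgent(["n1", "n2"], epsilon=0.0)  # no exploration, for determinism
agent.update("n1", agent.reward(0.8, 2), 0.0)
agent.choose_next_hop()  # -> "n1"
power_level(25.0)        # -> 1
```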


Data Availability

Raw data were generated by running experiments in the NS3 simulator. Derived data supporting the findings of this study are available from the corresponding author on request.

Code Availability

Custom code is not available.


Funding

No funds, grants, or other support was received.

Author information

Corresponding author

Correspondence to Arunita Kundaliya.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Kundaliya, A., Kumar, S. & Lobiyal, D.K. Throughput and Lifetime Enhancement of WSNs Using Transmission Power Control and Q-learning. Wireless Pers Commun 132, 799–821 (2023). https://doi.org/10.1007/s11277-023-10622-x

