
Deep Reinforcement Learning Based Active Queue Management for IoT Networks

Published in: Journal of Network and Systems Management (2021)

Abstract

The Internet of Things (IoT) finds applications in home, city, and industrial settings. Networks are transitioning to fog/edge architectures to provide the capacity needed for IoT. However, to handle the enormous volume of traffic generated by IoT devices and to reduce queuing delay, novel self-learning network management algorithms are required at fog/edge nodes. Active Queue Management (AQM) is a well-known intelligent packet-dropping technique for differentiated QoS. In this paper, we propose a new AQM scheme based on Deep Reinforcement Learning (DRL) and introduce a scaling factor in the reward function to balance the trade-off between queuing delay and throughput. We choose Deep Q-Network (DQN) as the baseline for our scheme and compare our approach with several existing AQM schemes by deploying them at the interface of a fog/edge node. We simulate the schemes under various bandwidth and round-trip time (RTT) configurations. The simulation results show that our scheme outperforms the other AQM schemes in terms of delay and jitter while maintaining above-average throughput, and also confirm that DRL-based AQM is effective in managing congestion.
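As a rough illustration of the delay-throughput trade-off described in the abstract, the sketch below shows one plausible shape for a scaled reward function and an epsilon-greedy enqueue/drop decision driven by DQN outputs. The variable names, normalization, and default weight are assumptions for illustration only, not the authors' exact formulation; the authors' actual implementation is linked in the Notes below.

```python
# Hypothetical sketch of a DRL-based AQM reward and drop decision.
# The weight w, the 100 ms delay cap, and the two-action space are
# illustrative assumptions, not the paper's exact design.

import random


def reward(throughput_mbps, queuing_delay_ms, link_capacity_mbps, w=0.5):
    """Combine normalized throughput and queuing delay into one scalar.

    w is the scaling factor: larger w favors throughput, smaller w
    penalizes queuing delay more heavily.
    """
    norm_throughput = throughput_mbps / link_capacity_mbps   # in [0, 1]
    norm_delay = min(queuing_delay_ms / 100.0, 1.0)           # cap at 100 ms
    return w * norm_throughput - (1.0 - w) * norm_delay


def choose_action(q_values, epsilon=0.1):
    """Epsilon-greedy selection over {0: enqueue, 1: drop} from DQN outputs."""
    if random.random() < epsilon:
        return random.randint(0, len(q_values) - 1)
    return max(range(len(q_values)), key=lambda a: q_values[a])


if __name__ == "__main__":
    # Example: a moderately loaded 10 Mbps bottleneck with 20 ms queuing delay.
    print(reward(throughput_mbps=8.0, queuing_delay_ms=20.0,
                 link_capacity_mbps=10.0, w=0.5))
    print(choose_action([0.3, -0.1]))  # Q-values for enqueue vs. drop
```

In this sketch, sweeping w between 0 and 1 moves the agent's operating point from delay-sensitive (aggressive dropping) to throughput-sensitive (permissive queuing), which is the trade-off the scaling factor in the paper is meant to expose.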


Notes

  1. Our code is available at: https://github.com/kminsu1204/DQN-based-AQM.


Author information


Corresponding author

Correspondence to Alagan Anpalagan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kim, M., Jaseemuddin, M. & Anpalagan, A. Deep Reinforcement Learning Based Active Queue Management for IoT Networks. J Netw Syst Manage 29, 34 (2021). https://doi.org/10.1007/s10922-021-09603-x

