
Optimizing optical network longevity via Q-learning-based routing protocol for energy efficiency and throughput enhancement

Published in: Optical and Quantum Electronics

Abstract

In optical networks, extending network lifetime is of critical importance. This article presents a Q-learning-based routing protocol designed to prolong the lifetime of optical networks by improving energy efficiency and throughput. Using Q-learning, a reinforcement learning technique, the protocol manages energy usage dynamically; its primary objective is to make routing decisions that maximize the long-term reward of individual nodes while increasing energy efficiency. In a detailed study, the protocol's performance is compared with that of well-known baselines: Low-Energy Adaptive Clustering Hierarchy (LEACH), Multi-Hop LEACH (M-LEACH), and Balanced LEACH (B-LEACH). The evaluation considers network lifetime, measured by the ratio of active to inactive nodes; energy efficiency, measured by per-round energy consumption; quality of service, measured by throughput per round; and scalability, assessed on networks of 40, 70, and 100 nodes. Each network configuration is evaluated over 5,000 rounds. In the simulation results, M-LEACH outperforms LEACH and B-LEACH on all performance measures, making it the strongest baseline. The proposed Q-learning-based protocol, in turn, outperforms LEACH, M-LEACH, and B-LEACH in network lifetime, energy efficiency, quality of service, and scalability.
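
To make the Q-learning routing idea concrete, the sketch below shows a minimal next-hop selection and update loop in Python. It is an illustration only: the state/action encoding (a node choosing a next hop among its neighbors), the reward combining residual energy and link throughput, and the parameter values are assumptions for this sketch, not the protocol as published in the article.

```python
import random
from collections import defaultdict

# Illustrative Q-learning next-hop selection. The reward shaping, state/action
# encoding, and parameter values are assumptions, not the authors' published protocol.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

# Q[(node, next_hop)] -> estimated long-term reward of forwarding via next_hop
Q = defaultdict(float)

def reward(residual_energy, link_throughput):
    """Hypothetical reward: favor energy-rich neighbors and high-throughput links."""
    return 0.5 * residual_energy + 0.5 * link_throughput

def choose_next_hop(node, neighbors):
    """Epsilon-greedy next-hop selection among a node's neighbors."""
    if random.random() < EPSILON:
        return random.choice(neighbors)
    return max(neighbors, key=lambda n: Q[(node, n)])

def update(node, next_hop, r, next_hop_neighbors):
    """Standard Q-learning update applied after forwarding a packet via next_hop."""
    best_future = max((Q[(next_hop, n)] for n in next_hop_neighbors), default=0.0)
    Q[(node, next_hop)] += ALPHA * (r + GAMMA * best_future - Q[(node, next_hop)])
```

In a scheme of this kind, neighbors with depleted energy yield low rewards, so traffic gradually shifts away from them; this is the general mechanism by which Q-learning routing can balance energy consumption and extend network lifetime.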


Data availability

Not applicable.


Acknowledgements

The authors thank the institute and university for supporting this research.

Funding

This research received no external funding.

Author information

Contributions

Conceptualization, AVJ; methodology, VJKKS; software, AVJ; validation, AVJ; formal analysis, VJKKS; investigation, AVJ; resources, AVJ; data curation, AVJ; writing, original draft preparation, AVJ; writing, review and editing, VJKKS; visualization, AVJ; supervision, VJKKS; project administration, VJKKS. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Ashwini V. Jatti.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare relevant to this article’s content.

Ethical approval

This article does not contain any studies involving human participants performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jatti, A.V., Sonti, V.J.K.K. Optimizing optical network longevity via Q-learning-based routing protocol for energy efficiency and throughput enhancement. Opt Quant Electron 56, 32 (2024). https://doi.org/10.1007/s11082-023-05658-z


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11082-023-05658-z
