Abstract
In recent years, Artificial Intelligence (AI) has been widely adopted in networking to provide zero-touch, fully autonomous services for the next generation of Beyond 5G (B5G)/6G networks. However, AI-driven attacks on these services are a major obstacle to realising this vision. How resilient AI models are against attacks must therefore be carefully evaluated before deploying services that could affect the privacy and security of billions of people. We evaluate the resilience of a Machine Learning (ML)-based network traffic classification use case under attacks launched during the model training and testing stages, using multiple resilience metrics. Furthermore, we investigate a novel approach that uses Explainable AI (XAI) to detect attacks on network classification. Our experiments show that attacks clearly degrade model integrity, that this degradation is measurable with the metrics, and that it is detectable with XAI.
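The kind of training-stage attack and resilience measurement described above can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the synthetic features, the random-forest classifier, the 30% label-flip rate, and the accuracy-ratio resilience metric are all illustrative assumptions standing in for real network-flow data and the authors' chosen metrics.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for network-flow features (hypothetical data,
# two traffic classes, e.g. benign vs. malicious).
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

def train_and_score(y_train):
    """Train a classifier on (possibly poisoned) labels; score on clean test set."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_train)
    return accuracy_score(y_te, clf.predict(X_te))

clean_acc = train_and_score(y_tr)

# Label-flip poisoning attack during training: flip 30% of training labels.
rng = np.random.default_rng(0)
y_pois = y_tr.copy()
idx = rng.choice(len(y_pois), size=int(0.3 * len(y_pois)), replace=False)
y_pois[idx] = 1 - y_pois[idx]
pois_acc = train_and_score(y_pois)

# One possible resilience metric: fraction of clean accuracy retained
# under the attack (1.0 = fully resilient).
resilience = pois_acc / clean_acc
print(f"clean={clean_acc:.3f} poisoned={pois_acc:.3f} resilience={resilience:.3f}")
```

Testing-stage (evasion) attacks would instead perturb `X_te` at inference time, and an XAI-based detector would compare feature attributions (e.g. SHAP values) between clean and attacked models rather than raw accuracy.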
Acknowledgment
This work is partly supported by European Union in SPATIAL (Grant No: 101021808), and Science Foundation Ireland under CONNECT phase 2 (Grant no. 13/RC/2077_P2) projects.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sandeepa, C. et al. (2024). From Opacity to Clarity: Leveraging XAI for Robust Network Traffic Classification. In: Herath, D., Date, S., Jayasinghe, U., Narayanan, V., Ragel, R., Wang, J. (eds) Asia Pacific Advanced Network. APANConf 2023. Communications in Computer and Information Science, vol 1995. Springer, Cham. https://doi.org/10.1007/978-3-031-51135-6_11
Print ISBN: 978-3-031-51134-9
Online ISBN: 978-3-031-51135-6