Abstract
Benefiting from high-speed transmission and ultra-low latency, Fifth Generation (5G) networks play an important role in contemporary society. The accessibility and friendly user experience provided by 5G lead to the generation of massive data, which are transmitted in various forms and, in turn, promote the development of big data and intelligent decision support systems (DSS). Although artificial intelligence (AI) enables DSS to achieve high recognition performance on large-scale data, an adversarial sample generated by deliberately adding subtle noise to a clean sample can cause AI models to produce false output with high confidence, which raises concerns about AI. It is therefore necessary to enhance the interpretability and security of AI when it is adopted in areas where decision-making is critical. In this paper, we study the challenges posed by next-generation DSS in the era of 5G and big data. To build trust in AI, the saliency map is adopted as a visualization method to reveal the vulnerability of neural networks. The visualization method is further used to identify imperceptible adversarial samples and the reasons for the misclassification of high-accuracy models. Finally, we conduct extensive experiments on large-scale datasets to verify the effectiveness of the visualization method in enhancing AI security for 5G-enabled DSS.
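As a rough illustration of the gradient-based saliency idea the abstract refers to (a sketch, not the authors' implementation), the snippet below computes a per-pixel saliency map for a toy linear softmax classifier in NumPy: the saliency of an input feature is the magnitude of the gradient of the target-class score with respect to that feature, so large values mark the pixels an adversary can perturb most cheaply. All shapes and names here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency_map(W, target):
    """Saliency for a linear model z = W @ x + b.

    The gradient of z[target] w.r.t. the input x is exactly the row
    W[target]; its absolute value gives one saliency score per pixel.
    (For a deep network this gradient would be obtained by
    backpropagation instead.)
    """
    return np.abs(W[target])

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))     # 3 classes, 4 "pixels" (toy sizes)
b = np.zeros(3)
x = rng.normal(size=4)

probs = softmax(W @ x + b)                      # class probabilities
sal = saliency_map(W, target=int(probs.argmax()))
print(sal)                                      # one score per pixel
```

In the paper's setting the same quantity, visualized as a heat map over the input image, is what exposes which regions a high-confidence misclassification hinges on.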
Data availability
The data presented in this study are available from the corresponding author on reasonable request.
Acknowledgements
This work is supported in part by the Major Key Project of PCL (Grant No. PCL2022A03), the Guangdong Key R&D Program of China (2019B010136003), the Guangdong Higher Education Innovation Group (2020KCXTD007), the Guangzhou Higher Education Innovation Group (202032854), and the Guangzhou Science and Technology Program of China (202201010606).
Author information
Authors and Affiliations
Contributions
Denghui Zhang wrote the paper, Zhaoguan Gu proposed the methodology, and Lijian Ren revised the manuscript, while Muhammad Shafiq proofread the paper, provided software, and checked the results.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could appear to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhang, D., Gu, Z., Ren, L. et al. An interpretability security framework for intelligent decision support systems based on saliency map. Int. J. Inf. Secur. 22, 1249–1260 (2023). https://doi.org/10.1007/s10207-023-00689-9