
Guarding 6G use cases: a deep dive into AI/ML threats in All-Senses meeting

Published in: Annals of Telecommunications

Abstract

With recent advances in 5G and 6G communications and the increasing need for immersive interaction driven by the pandemic, new use cases such as the All-Senses meeting are emerging. Realizing these use cases requires numerous sensors, actuators, and virtual reality devices. Additionally, artificial intelligence (AI) and machine learning (ML), including generative AI, can be used to analyze the large amounts of data generated by 6G networks and devices to enable new applications and services. While AI/ML technologies are evolving, they do not yet have the same level of security as well-established information technology components, so AI/ML threats and their impacts can be overlooked. At the same time, due to the inherent characteristics of AI/ML components and the design of the AI/ML pipeline, AI/ML services can be a target for sophisticated attacks. To provide a holistic security view, the effect of AI/ML components should be investigated, threats should be identified, and countermeasures should be planned. Therefore, in this study, which is an extended version of our recent work (Karaçay et al. 2023), we shed light on the use of AI/ML services, including generative large language model scenarios, in the All-Senses meeting use case and on their security aspects by carrying out threat modeling using the STRIDE framework and attack tree methodology. We also point out countermeasures for the identified threats.
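The attack-tree methodology mentioned above can be thought of as a recursive evaluation over AND/OR nodes: an attack goal is feasible if all children of an AND node (or any child of an OR node) are feasible. The following minimal Python sketch illustrates the idea; the node names are hypothetical examples, not taken from the paper's actual attack trees.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node of an attack tree: a leaf attack step or an AND/OR gate."""
    name: str
    gate: str = "LEAF"          # "AND", "OR", or "LEAF"
    feasible: bool = False      # only meaningful for leaves
    children: List["Node"] = field(default_factory=list)

def is_feasible(node: Node) -> bool:
    """An AND node needs every child attack step; an OR node needs any one."""
    if node.gate == "LEAF":
        return node.feasible
    results = [is_feasible(c) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

# Hypothetical example: poisoning an ML model in the meeting pipeline.
root = Node("Poison emotion-recognition model", "AND", children=[
    Node("Access training data path", "OR", children=[
        Node("Compromise data collector", feasible=True),
        Node("Tamper with data in transit", feasible=False),
    ]),
    Node("Inject crafted samples", feasible=True),
])
print(is_feasible(root))  # True: one OR branch plus the injection step suffice
```

In practice each leaf would carry a likelihood or cost rather than a boolean, and the same recursion would aggregate those values up the tree.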


[Figures 1–5 appear in the full article.]


Data availability

No datasets were generated or analyzed during the current study.

References

  1. Karaçay L, Laaroussi Z, Ujjwal S, Soykan EU (2023) On the security of 6G use cases: AI/ML-specific threat modeling of All-Senses meeting. In: 2023 2nd International Conference on 6G Networking (6GNet), pp 1–8. https://doi.org/10.1109/6GNet58894.2023.10317758

  2. Khorsandi BM (2022) Targets and requirements for 6G - initial E2E architecture. report, European Union’s Horizon 2020 research and innovation programme. https://hexa-x.eu/wp-content/uploads/2022/03/Hexa-X_D1.3.pdf

  3. Kononenko D, Lempitsky V (2015) Learning to look up: realtime monocular gaze correction using machine learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4667–4675

  4. Lombardi S, Simon T, Saragih J, Schwartz G, Lehrmann A, Sheikh Y (2019) Neural volumes: learning dynamic renderable volumes from images. ACM Trans Graph 38(4). https://doi.org/10.1145/3306346.3323020

  5. Mauri L, Damiani E (2022) Modeling threats to AI-ML systems using STRIDE. Sensors 22(17):6662


  6. Laaroussi Z, Ustundag Soykan E, Liljenstam M, Gülen U, Karaçay L, Tomur E (2022) On the security of 6G use cases: threat analysis of 'All-Senses Meeting'. In: 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), pp 1–6. https://doi.org/10.1109/CCNC49033.2022.9700673

  7. Shevchenko N, Chick TA, O'Riordan P, Scanlon TP, Woody C (2018) Threat modeling: a summary of available methods. Technical report, Carnegie Mellon University Software Engineering Institute, Pittsburgh, United States

  8. Lawrence J, Goldman DB, Achar S, Blascovich GM, Desloge JG, Fortes T, Gomez EM, Häberling S, Hoppe H, Huibers A, Knaus C, Kuschak B, Martin-Brualla R, Nover H, Russell AI, Seitz SM, Tong K (2021) Project Starline: a high-fidelity telepresence system. ACM Trans Graph (Proc SIGGRAPH Asia) 40(6)

  9. Akyildiz IF, Guo H (2022) Holographic-type communication: a new challenge for the next decade

  10. Clemm A, Vega MT, Ravuri HK, Wauters T, Turck FD (2020) Toward truly immersive holographic-type communication: challenges and solutions. IEEE Commun Mag 58(1):93–99. https://doi.org/10.1109/MCOM.001.1900272


  11. You D, Doan TV, Torre R, Mehrabi M, Kropp A, Nguyen V, Salah H, Nguyen GT, Fitzek FHP (2019) Fog computing as an enabler for immersive media: service scenarios and research opportunities. IEEE Access 7:65797–65810. https://doi.org/10.1109/ACCESS.2019.2917291


  12. Shi L, Li B, Kim C, Kellnhofer P (2021) Author Correction: Towards real-time photorealistic 3D holography with deep neural networks. Nature 593(7858):13. https://doi.org/10.1038/s41586-021-03476-5


  13. Patel K, Mehta D, Mistry C, Gupta R, Tanwar S, Kumar N, Alazab M (2020) Facial sentiment analysis using AI techniques: state-of-the-art, taxonomies, and challenges. IEEE Access 8:90495–90519. https://doi.org/10.1109/ACCESS.2020.2993803


  14. NVIDIA (2020) NVIDIA announces cloud-AI video-streaming platform to better connect millions working and studying remotely

  15. Martinek R, Kelnar M, Vanus J, Koudelka P, Bilik P, Koziorek J, Zidek J (2015) Adaptive noise suppression in voice communication using a neuro-fuzzy inference system. In: 2015 38th International conference on telecommunications and signal processing (TSP), pp 382–386. https://doi.org/10.1109/TSP.2015.7296288

  16. MPAI Community. http://mpai.community

  17. Bibhudatta D (2023) Generative AI will transform virtual meetings

  18. Hasan R, Hasan R (2021) Towards a threat model and security analysis of video conferencing systems. In: 2021 IEEE 18th annual consumer communications & networking conference (CCNC), pp 1–4. IEEE

  19. Isobe T, Ito R (2021) Security analysis of end-to-end encryption for Zoom meetings. IEEE Access 9:90677–90689


  20. Ling C, Balcı U, Blackburn J, Stringhini G (2021) A first look at Zoombombing. In: 2021 IEEE symposium on security and privacy (SP), pp 1452–1467. https://doi.org/10.1109/SP40001.2021.00061

  21. Qamar S, Anwar Z, Afzal M (2023) A systematic threat analysis and defense strategies for the metaverse and extended reality systems. Comput Secur 103127

  22. European Union Agency for Cybersecurity (ENISA) (2020) AI cybersecurity challenges: threat landscape for artificial intelligence. Report. https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges

  23. Tabassi E, Burns KJ, Hadjimichael M, Molina-Markham AD, Sexton JT (2019) A taxonomy and terminology of adversarial machine learning. NIST IR, 1–29

  24. Papernot N (2018) A marauder’s map of security and privacy in machine learning: an overview of current and future research directions for making machine learning secure and private. In: Proceedings of the 11th ACM workshop on artificial intelligence and security, pp 1–1

  25. Papernot N, McDaniel P, Sinha A, Wellman MP (2018) SoK: security and privacy in machine learning. In: 2018 IEEE european symposium on security and privacy (EuroS & P), pp 399–414. IEEE

  26. Barreno M, Nelson B, Sears R, Joseph AD, Tygar JD (2006) Can machine learning be secure? In: Proceedings of the 2006 ACM symposium on information, computer and communications security, pp 16–25

  27. Mauri L, Damiani E (2021) STRIDE-AI: an approach to identifying vulnerabilities of machine learning assets. In: 2021 IEEE international conference on cyber security and resilience (CSR), pp 147–154. IEEE

  28. Industry Specification Group (ISG) Securing artificial intelligence (SAI) (2021) Securing Artificial Intelligence (SAI). ETSI, France

  29. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, et al (2020) Language models are few-shot learners. In: Larochelle H, Ranzato M, Hadsell R, Balcan MF, Lin H (eds) Advances in Neural Information Processing Systems, vol 33, pp 1877–1901. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf

  30. Selin J (2019) Evaluation of threat modeling methodologies

  31. National Cyber Security Centre (2023) Using attack trees to understand cyber security risk. https://www.ncsc.gov.uk/collection/risk-management/using-attack-trees-to-understand-cyber-security-risk

  32. Microsoft (2021) Microsoft AI Security Risk Assessment, Best Practices and Guidance to Secure AI Systems

  33. Casella D, Lawson L (2022) AI and privacy: everything you need to know about trust and technology. https://www.ericsson.com/en/blog/2022/8/ai-and-privacy-everything-you-need-to-know

  34. Boyd C (2022) Meta blows safety bubble around users after reports of sexual harassment

  35. Hong S, Chandrasekaran V, Kaya Y, Dumitras T, Papernot N (2020) On the effectiveness of mitigating data poisoning attacks with gradient shaping. arXiv:2002.11497

  36. Nelson B, Barreno M, Chi FJ, Joseph AD, Rubinstein BIP, Saini U, Sutton C, Tygar JD, Xia K (2008) Exploiting machine learning to subvert your spam filter. In: USENIX workshop on large-scale exploits and emergent threats

  37. Oprea A, Vassilev A (2023) Adversarial machine learning: a taxonomy and terminology of attacks and mitigations (draft). Technical report, National Institute of Standards and Technology


  38. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press

  39. Ustundag Soykan E, Karaçay L, Karakoç F, Tomur E (2022) A survey and guideline on privacy enhancing technologies for collaborative machine learning. IEEE Access 10:97495–97519. https://doi.org/10.1109/ACCESS.2022.3204037


  40. The OWASP Foundation (2023) OWASP API Security Project. https://owasp.org/www-project-api-security/

  41. The OWASP Foundation (2023) OWASP Top 10 for large language model applications. https://owasp.org/www-project-top-10-for-large-language-model-applications


Funding

This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) through the 1515 Frontier Research and Development Laboratories Support Program under Project 5169902, and has been partly funded by the European Commission through the Horizon Europe/JU SNS project Hexa-X-II (Grant Agreement no. 101095759).

Author information


Contributions

All authors contributed equally to the publication and reviewed the manuscript.

Corresponding author

Correspondence to Leyli Karaçay.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Karaçay, L., Laaroussi, Z., Ujjwal, S. et al. Guarding 6G use cases: a deep dive into AI/ML threats in All-Senses meeting. Ann. Telecommun. (2024). https://doi.org/10.1007/s12243-024-01031-7
