Abstract
LLMs can be vulnerable to prompt injection attacks. Just as a code injection can alter the behavior of a program, a malicious prompt injected into an LLM's input can redirect the execution flow of the business logic built around it, because LLMs rely on user-provided text to control that flow. In interactive systems, this poses significant business and cybersecurity risks. Mitigations that can reduce this novel attack surface include prohibiting the use of LLMs in critical systems, developing tools that verify prompts and the API calls they produce, adopting security-by-design practices, and enhancing incident logging and alerting mechanisms.
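The prompt- and API-call-verification mitigation can be sketched in a few lines. The Python snippet below is a minimal illustration, not an implementation from the chapter: all names (ALLOWED_CALLS, dispatch, the JSON call format) are hypothetical assumptions. It checks an LLM-proposed API call against an allowlist of functions and permitted arguments before executing it, so that injected instructions cannot trigger arbitrary calls.

# Hypothetical sketch of verifying an LLM-proposed API call before execution;
# ALLOWED_CALLS, dispatch, and the JSON call format are illustrative assumptions.
import json
from typing import Any, Callable

# Allowlist: callable name -> (handler, permitted argument names).
ALLOWED_CALLS: dict[str, tuple[Callable[..., Any], set[str]]] = {
    "get_order_status": (lambda order_id: f"status of {order_id}", {"order_id"}),
}

def dispatch(llm_output: str) -> Any:
    """Execute an LLM-proposed call only if it passes verification."""
    try:
        call = json.loads(llm_output)  # expected shape: {"name": ..., "args": {...}}
    except json.JSONDecodeError as exc:
        raise ValueError("LLM output is not a well-formed call") from exc
    if not isinstance(call, dict):
        raise ValueError("LLM output is not a call object")
    name, args = call.get("name"), call.get("args", {})
    if name not in ALLOWED_CALLS:
        raise PermissionError(f"call {name!r} is not allowlisted")
    handler, permitted = ALLOWED_CALLS[name]
    if not set(args) <= permitted:
        raise PermissionError(f"unexpected arguments for {name!r}")
    return handler(**args)

# Even if injected text persuades the model to emit {"name": "delete_account", ...},
# dispatch() refuses it because the name is not on the allowlist.
print(dispatch('{"name": "get_order_status", "args": {"order_id": "A-42"}}'))

The design point is that the LLM is used only to extract intent, while execution authority stays with deterministic, auditable code, in line with the security-by-design practices the chapter recommends.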
Cite this chapter
Vogelsang, T. (2024). LLM Controls Execution Flow Hijacking. In: Kucharavy, A., Plancherel, O., Mulder, V., Mermoud, A., Lenders, V. (eds) Large Language Models in Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-54827-7_10
DOI: https://doi.org/10.1007/978-3-031-54827-7_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54826-0
Online ISBN: 978-3-031-54827-7