Abstract
This chapter provides a comprehensive overview of security considerations, vulnerabilities, and controls at the application layer of GenAI systems. An analysis of the OWASP Top 10 for LLM applications establishes the initial context for the security concerns of GenAI applications. Leading application design paradigms, including RAG, ReAct, and agent-based systems, are explored along with their security implications. Major cloud-based AI services and their associated security features are discussed. The Cloud Security Alliance's Cloud Controls Matrix is leveraged to evaluate application security controls relevant to GenAI, and examples grounded in banking connect these controls to real-world scenarios. Through this multifaceted coverage of risks, design patterns, services, and control frameworks, the chapter equips readers with actionable insights for securing diverse GenAI applications by integrating security across the full application life cycle.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Huang, K., Huang, G., Dawson, A., Wu, D. (2024). GenAI Application Level Security. In: Huang, K., Wang, Y., Goertzel, B., Li, Y., Wright, S., Ponnapalli, J. (eds) Generative AI Security. Future of Business and Finance. Springer, Cham. https://doi.org/10.1007/978-3-031-54252-7_7
DOI: https://doi.org/10.1007/978-3-031-54252-7_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54251-0
Online ISBN: 978-3-031-54252-7
eBook Packages: Business and Management (R0)