Abstract
This chapter explores policies, processes, and procedures for building a robust security program tailored to GenAI models and applications. It discusses key policy elements such as goals, risk management, compliance, and consequences, along with priority areas focused on model integrity, data privacy, resilience to attacks, and regulatory adherence. The chapter also covers specialized GenAI processes spanning risk management, development lifecycles, and access governance, and details security procedures for access control, operations, and data management in GenAI systems. Centralized, semi-centralized, and decentralized governance structures for GenAI security are analyzed. Finally, helpful framework resources are highlighted, including MITRE's ATLAS matrix, AI vulnerability databases, the Frontier Model Forum, Cloud Security Alliance initiatives, and the OWASP Top 10 for LLM Applications.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Huang, K., Yeoh, J., Wright, S., Wang, H. (2024). Build Your Security Program for GenAI. In: Huang, K., Wang, Y., Goertzel, B., Li, Y., Wright, S., Ponnapalli, J. (eds) Generative AI Security. Future of Business and Finance. Springer, Cham. https://doi.org/10.1007/978-3-031-54252-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54251-0
Online ISBN: 978-3-031-54252-7
eBook Packages: Business and Management (R0)