Abstract
The National Institute of Standards and Technology (NIST) is a recognized authority on computer security that publishes guidelines and standards for a broad range of technologies, including artificial intelligence (AI). These guidelines include requirements for transparency, explainability, testing, and validation of LLM decision-making to help ensure model reliability and security. NIST has also developed standards for cryptography, a critical element of many LLM-based applications such as secure communication and data encryption. These cryptography standards help ensure that LLM-based applications are secure and resilient against attacks by malicious entities. Together, NIST standards can provide a practical framework for secure and ethical LLM-based application development and deployment. By adhering to these standards, developers and organizations can increase confidence that their LLM-based applications are dependable, trustworthy, and resistant to attacks.
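As one illustration of how a NIST-standardized cryptographic primitive can be applied in an LLM-based application, the sketch below uses HMAC-SHA-256 (specified in FIPS 198-1 and FIPS 180-4) from the Python standard library to protect the integrity of a model payload exchanged between components. The function names `sign_payload` and `verify_payload` are hypothetical and not drawn from any particular NIST guideline; this is a minimal sketch, not a complete secure-messaging design.

```python
import hmac
import hashlib
import secrets

# Sketch: integrity protection for an LLM request/response payload using
# HMAC-SHA-256, a NIST-standardized construction (FIPS 198-1 over the
# FIPS 180-4 SHA-256 hash). Function names here are illustrative only.

def sign_payload(key: bytes, payload: bytes) -> bytes:
    """Return an HMAC-SHA-256 authentication tag over the payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_payload(key: bytes, payload: bytes, tag: bytes) -> bool:
    """Verify a tag using a constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(sign_payload(key, payload), tag)

# Usage: a 256-bit key shared between the LLM service and its client.
key = secrets.token_bytes(32)
msg = b"model response: ..."
tag = sign_payload(key, msg)
assert verify_payload(key, msg, tag)            # untampered payload accepted
assert not verify_payload(key, b"tampered", tag)  # modified payload rejected
```

In a real deployment the key would come from a key-management system rather than being generated inline, and confidentiality would additionally require a NIST-approved encryption mode such as AES-GCM.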
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Cite this chapter
Majumdar, S. (2024). Standards for LLM Security. In: Kucharavy, A., Plancherel, O., Mulder, V., Mermoud, A., Lenders, V. (eds) Large Language Models in Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-54827-7_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54826-0
Online ISBN: 978-3-031-54827-7
eBook Packages: Computer Science, Computer Science (R0)