OWASP Releases Security Checklist for Generative AI Deployment


Information security leaders now have a new tool to get started securely with AI.

The Open Worldwide Application Security Project (OWASP) has released the LLM AI Cybersecurity and Governance Checklist.

This 32-page document is designed to help organizations create a strategy for implementing large language models (LLMs) and mitigate the risks associated with using these AI tools.

Sandy Dunn, Chief Information Security Officer (CISO) at Quark IQ and lead author of the checklist, began working on it in August 2023 as an additional support resource for OWASP's Top 10 for LLM Applications, released in summer 2023.

“I started the first version to address issues I had noticed in discussions with other CISOs and cybersecurity practitioners. I saw that there was really a lot of confusion about what they should think about and where to start (with AI),” she told Infosecurity.

Steps to Take Before Implementing an LLM Strategy

First, the document provides a list of steps to take before rolling out an LLM strategy, including reviewing your cyber resilience and security training strategies and engaging with leadership on any implementation of AI in your workflows.

It also provides an overview of five ways organizations can deploy LLMs, depending on their needs.

“Application areas range from running consumer applications to training proprietary models on private data. Factors such as use case sensitivity, necessary capabilities, and available resources help determine the right balance between convenience and control,” the document reads.

While this list is far from exhaustive, Dunn said understanding these five types of models provides a practical starting framework for evaluating options.

In its second, larger part, the document sets out a list of 13 things to consider when implementing an LLM use case without adding unnecessary risk to your organization.

These include:

  • Business-oriented measures, such as establishing business cases or choosing the right LLM solutions
  • Risk management measures, such as threat modeling your use case, monitoring AI risks, and implementing AI-focused security training and red teaming
  • Legal, regulatory and policy measures (e.g. establishing compliance requirements and implementing testing, evaluation, verification and validation processes)

A Milestone in OWASP's Efforts to Secure AI

Dunn commented: “The four things I really wanted people to take away from the checklist were:

  1. Generative AI is a very different technology from those we have previously had to secure, and it will require a completely different mindset to protect an organization;
  2. AI brings asymmetric warfare: the adversary has an advantage due to the complexity and breadth of the attack surface. The first thing to consider is how quickly attackers will be able to use these tools to accelerate their attacks, which we are already seeing;
  3. Approach the implementation of AI holistically;
  4. Use existing legislation to inform your strategy: Although very few AI laws are currently applicable, many existing laws, such as the EU General Data Protection Regulation (GDPR) and state laws on privacy and security, impact your business's AI requirements.”

Dunn said she initially included more legal and regulatory information in the first version of the document, but after review, the team felt it was too U.S.-centric and decided to keep that part at a high level.

“I also saw this as something that fit better with the work of the OWASP AI Exchange,” she added.

The OWASP AI Exchange is a platform introduced in 2023 by the OWASP Foundation to be the collaboration hub for aligning AI security standards.

John Sotiropoulos, senior security architect at Kainos and a member of the core group behind the OWASP Top 10 for LLM Applications, said the checklist “represents an important step in OWASP’s efforts to protect AI.”

“Combined with our work on the AI Exchange and our collaboration with standards bodies, vendors, and public cybersecurity agencies, the checklist helps OWASP unify AI security guidance. Our membership in the US AI Safety Institute Consortium (AISIC) will accelerate this trend.”

The OWASP Foundation announced that it had joined the US AI Safety Institute Consortium in early February 2024.

Read more: AI Safety Summit: OWASP urges governments to agree on AI safety standards
