
AI Threat Labs

May 28, 2025


What’s in Your AI? Probably Something You Can’t Explain. Meet AI-BOM.

Rudy Lai

Marcelo Sousa


AI is moving faster than a Formula 1 race car, but without the right guardrails, it’s headed for a crash. From generative models reshaping finance to LLM copilots triaging incidents, every org wants a slice of the AI pie — and they want it yesterday.

But here’s the dirty secret: while developers are shipping, tweaking, and experimenting at full throttle, most security and compliance teams aren’t up to speed. And regulators? Still writing the manual. 

The result is a visibility gap that turns AI into a black-box problem. Customers, auditors, and your own board want answers — and not the hand-wavy kind. They want proof that your AI isn’t leaking data, baking in bias, or acting like an unpredictable toddler with access to nuclear codes.

In the same way that software bills of materials (SBOMs) gave us visibility into the traditional software supply chain, AI-BOMs (AI Bills of Materials) are emerging as the next essential layer, mapping the models, libraries, and datasets that power our AI systems. And it’s time to make them standard practice.

Shadow AI and AI supply chain risks

Tracking what goes into your AI is a pain. Components that power AI systems are even harder to track than those in traditional software. A single line of code calling an API, like OpenAI’s GPT-4, can introduce a powerful, opaque model into a product, and supply chain and software composition analysis (SCA) tools often miss it entirely. Developers frequently rely on model hubs like Hugging Face, which aggregate hundreds of AI models and tools under a single interface. This allows for rapid experimentation, but also means models can be swapped, fine-tuned, or chained together. It is great for speed, but not so much if you want clear visibility or good documentation.
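To make that concrete, here’s a minimal Python sketch (the model names are just examples) of how two innocuous-looking calls pull opaque AI dependencies into a product. The openai and transformers packages would show up in an SCA scan, but the models behind them would not:

```python
# Illustrative only: the libraries appear in requirements.txt, but the models
# they pull in leave no manifest entry for traditional SCA tools to flag.
from openai import OpenAI
from transformers import pipeline

client = OpenAI()  # remote, proprietary model reached through an API key


def summarize(text: str) -> str:
    # The model is just a string argument; nothing in the dependency manifest
    # records which model version actually handles production traffic.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content


# A Hugging Face model is downloaded at runtime by name; it can be fine-tuned,
# re-uploaded, or swapped upstream without any change to this code.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
```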

The result? A growing shadow ecosystem of unmanaged and often invisible AI dependencies. Security teams are left trying to assess risk without a map, while developers, eager to innovate, often move faster than traditional security tools can support. To make it worse, AI components don’t just update; they mutate, sometimes daily, introducing new capabilities, vulnerabilities, and ethical implications. Today’s stable release might be tomorrow’s liability.

Meanwhile, regulators are catching up fast. New laws like the EU AI Act and frameworks like the U.S. AI Bill of Rights are beginning to mandate transparency, accountability, and oversight of AI systems. These aren’t just policy changes; they’re compliance requirements. Organizations that can’t prove what’s running in their systems risk falling short of new regulatory expectations, adding compliance pressure on top of existing security concerns.

Introducing Snyk’s AI-BOM CLI 

To meet this urgent need for visibility, Snyk Labs has developed a new tool: the AI-BOM CLI. It’s built to give teams a fast, clear look at what AI components are actually in play. This command-line interface will allow you to generate an AI Bill of Materials from your codebase automatically and accurately, without placing additional documentation burden on developers.

At its core, the tool scans code repositories for AI components like models, datasets, libraries, and even agent frameworks. For example, it can detect usage of Hugging Face transformers, OpenAI APIs like GPT-4, smol-ai agents, or training datasets referenced in model cards. Snyk’s DeepCode engine powers the tool’s ability to detect embedded AI usage in code, even when there’s no clear manifest or package reference to guide the scan. That’s a huge advantage when dealing with today’s fast-moving, pieced-together AI stacks, where calling a model might be as simple and opaque as invoking a single API endpoint.

The AI-BOM CLI supports output in industry-standard formats, including CycloneDX, ensuring compatibility with existing security tools and audit processes. The result is a human-readable, structured report that details each detected AI component, its version, license, supplier, and usage context, bringing the same level of transparency and traceability that SBOMs brought to software.
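As a rough sketch of what such a report can capture, here is an illustrative CycloneDX-style entry for a detected model, assembled in Python. The field names follow CycloneDX 1.5 ML-BOM conventions; the AI-BOM CLI’s actual output may differ in structure and detail, and the values shown are illustrative:

```python
import json

# Rough sketch of one detected component in a CycloneDX-style AI-BOM.
component = {
    "type": "machine-learning-model",
    "name": "distilbert-base-uncased-finetuned-sst-2-english",
    "version": "main",  # illustrative: the model revision observed at scan time
    "supplier": {"name": "Hugging Face"},
    "licenses": [{"license": {"id": "Apache-2.0"}}],
    "properties": [
        # hypothetical property capturing where the model is used in the repo
        {"name": "usage-context", "value": "src/sentiment/service.py"},
    ],
}

bom = {"bomFormat": "CycloneDX", "specVersion": "1.5", "components": [component]}
print(json.dumps(bom, indent=2))
```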

Whether you’re trying to meet regulatory expectations, prepare for an audit, or simply understand what AI components your teams are using, the AI-BOM CLI offers a fast, scalable, and developer-friendly way to regain control over your AI supply chain.

Snyk Labs Early Prototype: AI-BOM CLI

Why now, and why Snyk?

AI-BOM isn’t just a new feature. It’s a strategic entry point into the broader AI security conversation. Right now, most orgs are still figuring out just how deep their AI stack goes — and where the risks are hiding. This tool offers something rare: visibility. It answers the foundational question, “What AI are we actually using?” that must be addressed before you can manage risk, meet compliance obligations, or defend against attacks.

This fits right in with Snyk’s DNA and our mission: giving devs and security teams tools that don’t slow them down. But AI-BOM is more than just a list. It lays the groundwork for powerful future capabilities: correlating components with emerging AI vulnerabilities, accelerating automated red teaming by targeting known high-risk elements, and enforcing policies to block unauthorized models, datasets, or licenses in CI/CD workflows.
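To illustrate where the CI/CD piece could go, here is a hypothetical policy gate, not a Snyk feature today: the file name, allow-list, and blocked-model rules are all assumptions made for the sketch. It reads an exported CycloneDX-style AI-BOM and fails the build when it finds an unauthorized model or license:

```python
import json
import sys

# Hypothetical CI gate: read an exported AI-BOM (CycloneDX-style JSON, file
# name assumed) and fail the pipeline on policy violations.
ALLOWED_LICENSES = {"Apache-2.0", "MIT"}  # example policy, not a Snyk default
BLOCKED_MODELS = {"gpt-4"}                # example: block direct GPT-4 usage

with open("aibom.json") as f:
    bom = json.load(f)

violations = []
for component in bom.get("components", []):
    name = component.get("name", "")
    if name in BLOCKED_MODELS:
        violations.append(f"blocked model: {name}")
    for entry in component.get("licenses", []):
        license_id = entry.get("license", {}).get("id")
        if license_id and license_id not in ALLOWED_LICENSES:
            violations.append(f"disallowed license {license_id} on {name}")

if violations:
    print("AI-BOM policy violations:\n  " + "\n  ".join(violations))
    sys.exit(1)  # break the build
print("AI-BOM policy check passed")
```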

Built for early adopters, it’s fast, flexible, fit for real-world use, and constantly improved through iteration. Forward-leaning organizations aren’t waiting for the AI landscape to settle; they’re acting now, and with AI-BOM, Snyk lets them move fast without losing control.

What AI-BOM solves in the real world

AI-BOM really shines when things get messy, in those moments where time, accuracy, and accountability matter most. Like when you’re racing the clock after a vulnerability drops in a model you depend on. With AI-BOM, security teams can immediately identify every application that uses the affected model, pinpoint where it lives in the codebase, and prioritize remediation. No guesswork. No digging through outdated documentation under pressure.

In a breach scenario, like a compromise at a major model provider, you suddenly need to know where you’re exposed. The ability to map exposure across your entire environment becomes mission-critical. AI-BOM provides that visibility instantly, enabling faster, more targeted incident response.
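As a sketch of what that exposure mapping might look like, assuming AI-BOMs are exported per repository as CycloneDX-style JSON (the path layout and model name below are illustrative), a quick sweep could be as simple as:

```python
import glob
import json

# Hypothetical exposure check: list every application whose AI-BOM contains
# a model named in an advisory.
AFFECTED = "distilbert-base-uncased-finetuned-sst-2-english"  # example model

for bom_path in glob.glob("aiboms/*.json"):
    with open(bom_path) as f:
        bom = json.load(f)
    hits = [c["name"] for c in bom.get("components", []) if AFFECTED in c.get("name", "")]
    if hits:
        print(f"{bom_path}: exposed via {', '.join(hits)}")
```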

And when audit time rolls around? You’ll have the receipts, with no scrambling required. When asked to prove how an AI-native application was built, what data it was trained on, or what licenses govern its components, AI-BOM serves as a traceable record. It satisfies transparency requirements and reinforces your commitment to responsible AI practices.

It brings AI security into the development pipeline, catching risky dependencies before they ever reach production.

What’s next + get involved

This first prototype of the AI-BOM CLI is just one step in a much larger journey. As AI continues to reshape how software is built, secured, and governed, visibility into the components that power these systems will be foundational. AI-BOM lays the groundwork for what comes next: a comprehensive AI Security Posture Management (AI-SPM) solution and Snyk’s broader AI Trust Platform, tools designed to help organizations secure, monitor, and manage AI across the entire lifecycle.

We’re just getting started and want you to be part of what’s coming next. Sign up for AI incubation updates and, if you’re a Snyk customer, connect with your account executive. Securing AI systems requires new visibility and control, and we’re building it together.