Skip to main content

AI Threat Labs

October 22, 2025


Introducing Evo by Snyk: Securing the New AI-Native Landscape


The emergence of AI-native applications, developed using large language models, has introduced a rapidly evolving security landscape. These agentic, non-deterministic systems learn, adapt, and act autonomously, expanding the attack surface in ways traditional defenses can’t predict. From prompt injection and data poisoning to model theft and sensitive data leakage, every layer of the AI stack introduces new opportunities for exploitation.

At the same time, Shadow AI, the unsanctioned use of AI tools by employees, further blurs visibility and control. The result is a security landscape defined by constant evolution and complexity, one that demands continuous, adaptive, and agentic protection rather than static rules or periodic checks.

Security must lead the charge to unlock the full potential of agentic systems and provide a comprehensive approach for enterprises to tackle AI security challenges. That’s why Snyk is excited to announce Evo, the world's first agentic security orchestration system, designed specifically to secure AI-native applications. 

Evo offers a robust suite of task-oriented security agents, designed to provide comprehensive protection for your AI initiatives. Its capabilities include identifying AI assets, generating an AI Bill of Materials, and securing AI-native applications through integrated threat modeling. Evo also facilitates red teaming exercises and governs developer usage of MCP servers with its MCP Scanning feature.

Evo is itself an AI-native application designed for rapid adaptation, leveraging Snyk's Developer Security platform. It unifies visibility, testing, policy, and remediation through specialized security agents.

You can start your Evo AI Security journey today in two ways: 

  1. Experiment with select AI security tools for free immediately on Snyk Labs.  

  2. Apply to join the Design Partner Program for a broader preview of our enterprise offering through Evo’s Agentic Orchestration System.  

Getting started with select Evo AI security tools for free

Snyk Labs is the innovation engine behind Evo, the place where our R&D team prototypes and releases early AI security tools for open experimentation. As these capabilities mature, they are added to Evo as agentic enterprise-ready capabilities, giving security teams the orchestration power to defend AI-native applications at scale.

Today, we’re giving every Snyk user a first look at Evo’s power through select tools in the CLI. These are experimental, CLI-based tools you can use right now to explore Evo’s first set of capabilities. These capabilities have a dual purpose: to protect AI usage and protect AI applications. On the former, you can scan endpoints to identify usages of MCP servers and associated risks. On the latter, you can generate an AI-BOM and Red Team the code of your AI native applications. These new CLI commands bring hands-on, intelligent security directly into your development workflow, offering every Snyk user a front-row seat to the future of agentic AI protection.

Find shadow AI usage with Snyk AI-BOM 

Modern software increasingly calls on AI models and frameworks, but do you know which ones are actually in use across your codebase? Snyk AI-BOM helps developers and security teams answer that critical question. By generating an AI Bill of Materials (AI-BOM), it creates a complete inventory of AI frameworks, libraries, and LLMs used in your projects.

With the scanner, you can track unapproved AI models, audit AI framework usage, and identify potentially vulnerable AI components. It integrates directly into development workflows, letting teams quickly search repositories for specific models, libraries, or keywords, providing visibility and control over AI dependencies.

Whether you’re auditing AI usage, managing license compliance, or planning migrations, the AI-BOM gives you the insights you need to make informed decisions and stay ahead of risks. 

Automate adversarial testing with Snyk Red Teaming

AI agents are now running workflows. They’re plugged into APIs, databases, and production systems, fetching data, automating decisions, and executing tasks autonomously. While traditional security tools were built for deterministic systems code that behaves the same way every time, AI-native applications are non-deterministic. 

This means they produce new behaviors and outputs based on subtle context shifts or model state. This unpredictability has created an entirely new class of risks, from prompt injection and model exfiltration to data leakage and insecure outputs that static scanners simply can’t detect. 

As AI agents become core to enterprise applications, the need for continuous validation has become critical. Red Teaming has emerged as the most effective way to test and harden these systems, not by merely guessing threats, but by simulating real, adaptive adversaries that probe models the way attackers do.

Red Teaming simulates real-world attacks on AI agents and underlying models, evaluates their behavior in context, and delivers actionable insights. Developers get immediate feedback on risky prompts and behaviors before code ships, while security engineers gain consistent, repeatable tests across agents, allowing them to better manage the risk of AI-native apps. It brings Red Teaming directly into development workflows in a fast and developer-friendly way.

Secure MCP servers with Snyk MCP-Scan

MCP (Model Context Protocol) servers are critical to AI applications, enabling AI assistants to access tools, resources, and external systems, but they can also introduce serious security risks. Snyk MCP-Scan helps developers detect and remediate these risks, including tool poisoning, prompt injection, and insecure server configurations.

Snyk MCP-Scan automatically audits MCP servers across AI applications, identifying vulnerabilities before they can be exploited. 

Snyk brings unmatched depth to MCP server scanning and leads industry understanding of AI supply chain risks with our team of MCP Security pioneers and originators of concepts like Toxic Flows. MCP-scan makes it simple to continuously validate the security of your AI tooling, helping teams maintain trust in their AI workflows while catching risks that traditional security tools miss.

Start your AI security journey today

Evo is available in experimental preview today for design partners, with broader availability in 2026.

Snyk AI-BOM, Snyk AI Red Teaming, and Snyk MCP-Scan are available in experimental preview for Snyk customers. You can test these latest innovations in AI security today, right here in Snyk Labs.

While you’re able to explore the experimental preview capabilities today through Snyk Labs, we’re also expanding access to the full Evo Agentic Security Orchestration System for approved Design Partners

Excited to get started right away? Try out our latest innovations in AI security.