Agentic AI use cases are ramping up quickly. Gartner estimates that by 2028, 15% of day-to-day work decisions will be made by AI agents. And while that percentage might not sound overwhelming, consider that in a 5,000-employee organization, it would translate to over 1.5 million work hours per year, and, at a relatively conservative hourly rate, somewhere around $70 million in annual savings. Numbers like these create the top-down pressure in most organizations to adopt AI. That pressure will inevitably lead to scenarios where business leaders have to look past both known and unknown risks posed by new attack vectors and other security exposures.
As we highlighted extensively in our recent launch of Snyk Labs, we hear a clear mandate from our customers to take a leadership position in securing this agentic AI future. With this in mind, I’m particularly excited to announce that Snyk has acquired Invariant Labs, a highly innovative AI security research firm and early pioneer in developing safeguards against emerging threats to agentic AI, such as tool poisoning and Model Context Protocol (MCP) vulnerabilities.
Invariant Labs significantly bolsters Snyk's AI security capabilities, bringing deep expertise in addressing these emerging AI threats. Their team of preeminent researchers, along with proven technology such as their "Guardrails" transparent security layer, will advance Snyk's ability to provide real-time defense for AI-native and agentic applications. The acquisition enhances Snyk Labs and positions Snyk to lead in ensuring organizations can securely develop and deploy the next generation of AI software.
A Deep Bench Just Got Deeper: Pioneering Research and Practical Tools for AI Engineers
Snyk prides itself on employing some of the leading researchers in the field of application security as part of our dedicated Security Labs team. These experts are always digging into new vulnerabilities, often uncovering major issues in key software and open source projects long before they hit public databases. This in-depth, proactive research has long been a cornerstone of our value in enabling strong DevSecOps programs. In recent years, we’ve extended it into a variety of AI-related research areas, and today’s addition provides a major boost to this branch of our research and thought leadership.

Invariant Labs boasts an impressive team with deep expertise in securing AI systems, particularly AI agents. Spun out of ETH Zurich (just like the DeepCode AI team that became the backbone of Snyk’s $100M+ AI-native static code analysis product), their team includes top AI and security researchers who have built a reputation for ensuring that as AI becomes more autonomous, it remains trustworthy and safe to use. They have won numerous accolades, including first prize in the prestigious SafeBench competition for their “AgentDojo” project. Their deep focus on research fits neatly into the Snyk mission and culture.
This acquisition brings a powerful suite of Invariant Labs products and foundational research advancements that directly enhance Snyk’s ability to deliver practical, real-world AI security, designed for the complexities faced by AI engineers and security teams:
* Invariant Guardrails: A transparent security layer at the LLM and agent level that allows agent builders and software engineers to augment existing AI systems with strong security guarantees. Guardrails offers crucial detection capabilities for Personally Identifiable Information (PII), secrets, copyright infringement, prompt injection, and harmful or unwanted content. Critically, it also integrates static code analysis (a natural place for Snyk Code integration) and performs image analysis (OCR) and HTML parsing to detect hidden threats within agent interactions. This comprehensive approach addresses the multifaceted attack surface of agentic AI.
* Invariant Explorer: A powerful tool for visualizing and exploring complex agent traces, enabling deep inspection of agent behavior, visual test results, and robust assertions (including fuzzy checkers like Levenshtein distance and LLM-as-a-judge pipelines) for comprehensive security analysis and debugging. It provides unprecedented visibility into the often-opaque actions of AI agents, a major pain point for AI developers.
* Invariant Gateway: A lightweight, zero-configuration service that acts as an intermediary (proxy) between AI agents and LLM providers (e.g., OpenAI, Anthropic, Gemini). It automatically traces agent interactions and stores them in Invariant Explorer, and, importantly, it supports guardrailing by intercepting and controlling LLM calls, applying policies to enforce security at runtime (a minimal sketch of this intercept-and-check pattern follows this list). This is critical for managing real-time risks.
* Invariant MCP-scan: A dedicated security scanning tool designed to check installed Model Context Protocol (MCP) servers for common vulnerabilities. It specifically scans for prompt injection attacks in tool descriptions, tool poisoning attacks, and cross-origin escalations, and can manage whitelists of approved entities. This tool directly addresses a rapidly emerging class of vulnerabilities.
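To make the Gateway-plus-Guardrails pattern concrete, here is a minimal, illustrative Python sketch of a guardrailing proxy: it runs simple policy checks on the outbound prompt and the inbound completion before either is allowed through. The regex patterns, function names, and the fake model call are assumptions made for illustration; they are not Invariant's actual rules or API.

```python
import re

# Illustrative policy patterns (assumptions, not Invariant's real rule set).
SECRET_PATTERN = re.compile(r"(?:AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----)")
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def policy_violations(text: str) -> list[str]:
    """Return a list of policy findings for a prompt or completion."""
    findings = []
    if SECRET_PATTERN.search(text):
        findings.append("possible credential or private key")
    if EMAIL_PATTERN.search(text):
        findings.append("possible PII (email address)")
    return findings

def guarded_llm_call(prompt: str, call_model) -> str:
    """Proxy a model call, blocking prompts or completions that violate policy."""
    if issues := policy_violations(prompt):
        raise ValueError(f"blocked outbound prompt: {issues}")
    completion = call_model(prompt)  # e.g., a call to your LLM provider's client
    if issues := policy_violations(completion):
        raise ValueError(f"blocked inbound completion: {issues}")
    return completion

if __name__ == "__main__":
    fake_model = lambda prompt: "Sure, here is a summary of the design doc."
    print(guarded_llm_call("Summarize this design doc.", fake_model))
```

In a real deployment, a proxy like this sits between the agent framework and the LLM provider, so every call can be traced and checked without modifying the agent's own code.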
Additionally, Invariant Labs' work is rooted in cutting-edge research that tackles the fundamental challenges of securing LLM-based systems. Some of the areas of research they have been leading include:
* Language Model Programming (LMP) and LMQL: This research introduces a high-level query language designed to simplify and control LLM interactions, treating LLM usage more like programming. LMQL allows for specification of constraints via a where clause to enforce legal sequences of tokens during decoding, significantly reducing model queries and improving efficiency. This means more predictable and secure AI behavior.
* Prompt Sketching: A novel prompting paradigm that enables template-guided LLM inference, where an LLM predicts values for multiple variables in a template rather than just completing a prompt. This offers more control over generation, supports reasoning frameworks, and has shown improved performance. For AI developers, this translates to more reliable and controllable AI outputs.
* Model Arithmetic: A principled inference framework for composing and biasing LLMs using simple formula-based composite models, allowing for fine-grained control over attributes like formality and sentiment without retraining. This provides powerful new ways to steer AI behavior towards secure and ethical outcomes.
* Constrained Decoding (DOMINO): An advanced algorithm that enforces strict formal language constraints during LLM text generation to ensure output format adherence, even achieving speedups over unconstrained decoding (a toy sketch of the underlying masking idea follows this list). This addresses the critical need for reliable and safe AI output formats, preventing common generation pitfalls.
* Data Contamination & Evasion (EAL): Research into Evasive Augmentation Learning (EAL), a rephrasing-based evasion strategy, reveals how models can improve performance on benchmarks while evading current contamination detection methods. This proactive understanding of sophisticated attack vectors is key to building resilient AI systems.
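As a rough intuition for constrained decoding, the toy Python sketch below masks out, at every step, any candidate character that would push the output outside a required format, then greedily picks the best remaining one. The template, the toy scoring function, and the character-level decoding are assumptions made for brevity; DOMINO itself operates on subword tokens against full formal grammars and is considerably more sophisticated.

```python
TEMPLATE = "NNNN-NN-NN"  # required output shape, e.g. an ISO-style date; N = any digit

def can_extend(prefix: str, ch: str) -> bool:
    """True if appending ch keeps the output a valid prefix of TEMPLATE."""
    if len(prefix) >= len(TEMPLATE):
        return False
    slot = TEMPLATE[len(prefix)]
    return ch.isdigit() if slot == "N" else ch == slot

def model_scores(prefix: str) -> dict[str, float]:
    """Stand-in for an LLM's next-token scores (a fixed toy distribution here)."""
    vocab = "abc0123456789-xyz"
    return {ch: 1.0 / (i + 1) for i, ch in enumerate(vocab)}

def constrained_decode() -> str:
    out = ""
    while len(out) < len(TEMPLATE):
        # Mask: keep only candidates that stay inside the required format.
        allowed = {ch: s for ch, s in model_scores(out).items() if can_extend(out, ch)}
        out += max(allowed, key=allowed.get)  # greedy pick among legal continuations
    return out

print(constrained_decode())  # always matches NNNN-NN-NN, e.g. "0000-00-00"
```

Because illegal continuations are never considered, the output is guaranteed to match the required shape, which is the property that makes constrained decoding attractive for producing safe, machine-parseable AI output.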
These innovations underscore a proactive approach to securing the AI software supply chain, addressing the "black box" nature and unpredictable behaviors that are key pain points for AI developers and security teams.
Invariant Labs: An Early Leader in Uncovering Real-World Agentic AI Threats
Nothing draws immediate attention to a security issue more than a real-world exploit, and Marc, Luca, and the team at Invariant Labs have been at the forefront of identifying, researching, and reporting on these, including the very recent GitHub MCP vulnerability that garnered significant market attention. That incident underscored the potential for even the most advanced AI systems to harbor severe security vulnerabilities. The popular GitHub MCP server, an AI tool developed to aid software developers in their coding tasks, was discovered to have exploitable weaknesses that could be used to execute malicious code. Of particular concern is the fact that the MCP server operates in a highly dynamic and interactive environment, where its actions can have immediate and widespread consequences. This occurrence, along with the recent incident Asana announced around their MCP server, serves as a poignant reminder that robust security measures are essential for AI systems, especially those with agentic capabilities.
Preparing to Address the Unique Security Challenges of AI-Native Apps
We believe it’s particularly critical to add the deep subject-matter expertise the Invariant Labs team brings as we evolve our mission for the age of AI-native apps. The mixed perspectives on how quickly AI agents will achieve broad adoption and impact are rooted largely in questions of security, and with good reason. Reports indicate that over 90% of organizations lack full confidence in their ability to secure AI-driven data, and nearly 70% cite data leaks as their primary concern in AI adoption.
Addressing these new paradigms is no small feat, especially when the nature of the challenges is still not fully understood by many in the tech community. Agentic AI, with its ability to act autonomously and make decisions, fundamentally disrupts the assumptions that underpin traditional application security. Unlike static applications, AI agents operate in a highly dynamic environment, where their behavior can vary significantly based on the data they process and the context in which they operate. This non-deterministic nature means that their actions are not always predictable, making it difficult to apply traditional pattern or parameter-based security controls.
The interconnected nature of AI agents further complicates securing them. These agents often communicate with multiple systems and services, both within and outside an organization, creating a vast and complex attack surface. Each interaction point represents a potential avenue of compromise, and the fluid roles of components within the application ecosystem mean that traditional static testing methods are insufficient. This variability requires a more adaptive and context-aware approach to security, one that can dynamically adjust to the agent’s changing roles and interactions.
The collaboration between AI developers and security experts is critical in this new environment, but it is a new and emerging field. AI developers are often more focused on the functionality and performance of their AI agents, while security experts are trained to identify and mitigate risks in static, deterministic systems. Bridging this gap requires a new dialogue and partnership, where both parties can work together to understand the unique security challenges posed by agentic AI. This collaboration must extend beyond identifying vulnerabilities to include developing new security frameworks and practices that can keep pace with the rapid advancements in AI technology. This is not new territory for Snyk: our heritage lies in enabling stronger collaboration between developers (or builders) and security teams.
Following Through to Build the Future Now
The Snyk AI Trust Platform was launched recently with a clear recognition of the pressing need for secure agentic AI innovation and a commitment to assisting organizations in this pursuit. Today’s acquisition stands as a testament to Snyk's unwavering dedication to addressing the most urgent customer needs in a fast-moving era where trust and security are essential for successful innovation.
The additional agentic AI capabilities that Invariant Labs brings to Snyk provide both near-term expertise in an emerging field and longer-term technical solutions that integrate seamlessly to create the forward-looking AI security platform enterprises will need to build fast and stay secure. We look forward to sharing more soon with our customers about the new research and technical developments.
If you’re an AI engineer, AI security engineer, platform leader, technologist, or security leader grappling with the complexities of agentic AI development and deployment, learn more about how Snyk’s now-expanded agentic AI security strengths, including Invariant Guardrails, Explorer, and mcp-scan, can help you build and deploy secure AI-native applications.
If you’d like to learn more about Invariant Labs and our plans to leverage their research and technology to enhance our customers’ AI security journey, please join us for an upcoming webinar by registering here.