Lasso is releasing a significant expansion to its automated AI red teaming engine. New capabilities include cloud agent integrations across AWS and Google Vertex, enabling automatic discovery and scanning of agents running inside third-party cloud environments.
Lasso’s Agentic Red Teaming also includes full OWASP LLM and Agentic Top 10 mapping across all scan components, and one-click runtime policy generation that turns a red team finding into an active guardrail without engineering intervention.

Newly Introduced: Agentic Components of a Lasso Red Teaming Scan
Lasso's scan is built around four components. The first, recon, runs first and feeds everything that follows. The other three are attack modes you select based on your risk surface and time constraints. Run one, run all, or run recon and bespoke only for the fastest high-signal result.
Reconnaissance - Model-aware adversarial recon layer
Adversarial testing needs to start before the first attack fires. Before anything else, Lasso's recon agent reverse engineers the target application. It surfaces the underlying model and its known weaknesses and system prompt. It also assesses whether the application uses memory or RAG, and which tools and APIs it connects to.
This mirrors the way an attacker approaches your agents. Once they know the model, they know its weaknesses. Then, they can calibrate their approach to the specific architecture rather than probing a generic surface. Recon is what makes the rest of the scan targeted rather than speculative, and it feeds directly into the bespoke layer.
Static Attacks- Choose from 300,000 most established AI testing
This is the baseline: a broad, fast sweep against a continuously updated library of 300K+ adversarial payloads, with 100% OWASP LLM and Agentic Top 10 coverage and 500+ new variants added weekly. Categories include prompt injection, jailbreak patterns, content moderation bypasses, obfuscation techniques, hate, privacy violations, and more.
Static testing is the most established form of AI security testing, and it remains valuable. It's fast, it's comprehensive at the model boundary, and it handles known attack patterns reliably. Its limitation is that it only describes part of the system. An application that passes every static test can still be manipulated through the sequence of interactions that precedes a harmful action.

Multi-Turn Attacks - How intent breaks over time
This is where intent starts to break. Multi-turn testing asks a harder question than static: what does this system become after sustained adversarial interaction?
Multi-turn testing is done through offensive agents built by Lasso. They’re rained specifically for extended adversarial conversation. One targets secret extraction, the other targets competitor drift. Both use techniques designed to apply pressure across a full conversation rather than a single exchange. This includes “good cop/bad cop” dynamics that manipulate the emotional register of the interaction to wear down the application's guardrails over time.
What these agents are testing for is fragile intent: the specific conditions under which an application can be redirected away from its intended purpose. That redirection doesn't happen in a single message. It happens across a conversation, through incremental persuasion, context poisoning, and instruction override. Single-turn testing misses it entirely.
Bespoke (High-Agency) Attacks- Intelligent dynamic attack simulation
This is the true agentic layer, and it's what sets Lasso's red teaming apart.
Bespoke attacks deploy autonomous adversaries with specialized roles. These agents run creative, multi-turn attack sequences tailored specifically to each application's intent, tools, and access permissions. They adapt across conversation turns, exploit context windows and agent memory, and chain tool calls to test the full agentic attack surface. Every mission is OWASP-mapped and customizable.

- Knows the underlying model,
- has a hypothesis about the system prompt,
- knows which tools are connected,
- and understands the application's behavioral scope.
That knowledge shapes how it attacks. The system uses all of this context to launch a targeted campaign against the actual risk surface of that specific application.
It’s adaptive attack logic of this kind, evolving as it learns the target, that stands the best chance of securing agentic AI against attackers who aren’t slowing down.
What Makes This Different Than Other AI Red Teaming
Most AI pen testing tools focus on single turn attacks using known techniques. They're useful for model-level coverage, but they don't capture how agentic applications actually fail.
The difference comes down to four things:
Full-Spectrum Testing Capability
Most tools operate in black box mode: they probe the application without knowing what's inside it. Because Lasso's discovery capability surfaces the full picture (model, system prompt, tools, guardrails, configurations), it can also operate in white box mode. This means that it can target vulnerabilities that have a real probability of being exploitable rather than probing blind.
Adaptive Attacks Tailored to the Specific Application
The recon-to-bespoke connection is the core of automated AI red teaming. The attack isn't templated against a generic AI application profile, but conditioned by what Lasso has learned about this application. All of its architecture, access permissions and behavioral boundaries need to be understood in order for the test to give a result that means something.
Offensive AI Agents Doing the Attacking
Rather than static payloads, Lasso uses adversaries that adapt across turns, chain tool calls, and exploit the same surfaces a real attacker would target. And because it sits across the discover, assess, and protect stages of the AI security lifecycle, Lasso has the context to attack with the full picture.
How Long Does an AI Red Teaming Assessment Take?
The answer depends on where you are in the deployment cycle. Before production, run the full scan: all four components across all categories, up to two hours. Fix what it finds, then rerun to validate the fixes held.
From there, every subsequent change triggers a targeted scan: recon plus bespoke, focused on what changed, in 15-20 minutes. That’s fast enough to gate a deployment without slowing one down.
The goal is to run the right test at the right moment: heavier upfront, lighter and faster as the system stabilizes, always triggered by change rather than the calendar.
Securing an AI Application? AI Red Teaming Is Not the Finish Line
The workflow starts with finding a vulnerability and ends with closing it.
When Lasso surfaces a finding, it comes with the exact misconfiguration identified and a recommended guardrail update. That remediation loop is getting tighter. The next version of the platform generates runtime policy updates with a single click.
Once a fix is applied, Lasso re-runs the attack that found the vulnerability: the specific attack, against the specific fix, to confirm it held.
From Agentic Red Team to AI Runtime: The Purple Teaming Loop
Red teaming without remediation stops at the finding. What closes the loop is purple teaming: the connection between red team findings and the runtime policies that act on them.
When Lasso surfaces a vulnerability, it generates a recommended policy automatically. That policy can draw from Lasso's library of out-of-the-box classifiers, or it can be a custom policy generated specifically for the finding, written by AI and scoped to the exact risk identified. As threats evolve and new findings emerge, the runtime protection evolves with it.
The Playbook: When to Run AI Red Teaming
Run automated AI red teaming when any of these are true:
- Before any new agent goes live. No exceptions.
- When the underlying model changes. Even if no code changed. A model update changes behavior, and behavior is what you're testing.
- When inputs change from structured to dynamic. An agent accepting free-form user input has a different attack surface than the one you last tested.
- When a new MCP server, API, or data source is connected. Every new integration expands the attack surface, and retrieval systems are injection vectors. Content the agent reads is content an attacker can write to.
- After any significant system prompt update. System prompts are critical assets. Changes to them warrant a new test.
- On a continuous, lifecycle-tied cadence. Automated red teaming tied to your deployment pipeline means the security posture stays current without requiring anyone to remember to schedule it.
The mindset shift here is treating red teaming the same way engineering teams treat CI/CD: not a gate you open occasionally, but a check that runs every time something ships.
Discovery feeds the recon that makes attacks targeted. Red team findings feed the runtime policies that make protection adaptive. The loop is continuous, and there's no point at which the environment drifts ahead of the security posture. That's the Lasso model, from build time to runtime.
FAQs


