RSAC 2025 Recap: Agentic AI, Global Recognition, and the Cowboys of GenAI Security

The AI conversation at RSAC 2025 represented a clear shift from previous years. No more speculation, no more prototypes. Agentic AI is real, operational, and rewriting the rules of enterprise security.
From keynote panels to hallway conversations, agentic AI took center stage. These autonomous systems, capable of decision-making and action-taking without human intervention, promise major productivity gains. But they also pose the most complex security challenge enterprises have ever faced.
On the show floor, cybersecurity vendors showcased AI's promise in boosting threat detection and resilience, while also acknowledging its risks. For healthcare and other highly regulated, resource-constrained industries, recognizing both the opportunities and challenges of Generative AI is essential.
Lasso entered the conversation with a clear message: securing GenAI needs to be an architecture, not just a feature. We appreciated the opportunity to showcase the platform we’ve built: one that addresses the operational realities of GenAI by design, not as an afterthought.

Lasso Wins Global InfoSec Award:
Next-Gen AI Agentic Application Security
We were also thrilled to win Cyber Defense Magazine’s Global InfoSec Award for “Next-Gen AI Agentic Application Security.” This accolade recognizes innovation in securing the most advanced AI architectures being deployed today: custom-built, semi-autonomous agents interacting with sensitive enterprise environments.
So what does agentic application security mean?
It means protecting not just the data used by AI, but the actions taken by AI agents. It’s about visibility into what models do, who triggers them, and how guardrails can prevent privilege escalation, data leaks, or system abuse in real time.
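To make the idea concrete, here is a minimal conceptual sketch of an action-level guardrail sitting between an agent and its tools. All names (roles, actions, `guarded_call`) are illustrative assumptions for this post, not Lasso's API:

```python
# Conceptual sketch of an action-level guardrail for an AI agent.
# Every role/action/function name here is hypothetical.

ALLOWED_ACTIONS = {
    "analyst": {"read_ticket", "summarize_logs"},
    "admin": {"read_ticket", "summarize_logs", "rotate_credentials"},
}

def guarded_call(role: str, action: str, execute, *args):
    """Refuse any tool call the caller's role does not explicitly permit."""
    if action not in ALLOWED_ACTIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not perform {action!r}")
    return execute(*args)

# An agent acting on behalf of an analyst can read tickets...
print(guarded_call("analyst", "read_ticket", lambda: "ticket #42"))
# ...but a prompt-injected request to rotate credentials is blocked:
try:
    guarded_call("analyst", "rotate_credentials", lambda: None)
except PermissionError as err:
    print(err)
```

The point is that the check happens at the action, not the prompt: even if an attacker talks the model into attempting privilege escalation, the call itself is denied.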
“The AI future is agentic. But without the right security, autonomy becomes an attack surface. That’s the risk Lasso was built to close.” – Ophir Dror, CPO & Co-Founder
Lasso’s platform delivers:
- Runtime Monitoring of every GenAI interaction
- Red Teaming & Risk Assessments tailored to LLM-specific threats
- Governance & Compliance modules to ensure alignment with AI frameworks like NIST and the EU AI Act

Speaking at the California Chamber of Commerce:
A Vision for GenAI Security
I was invited to speak at the California Chamber of Commerce’s RSAC-side event. In my trademark candid style, I distilled the vision that guides Lasso:
“When we started Lasso two years ago, we took on a naive mission: to enable every organization to use GenAI in a safe and secure manner. Today, that’s more urgent than it’s ever been.”
I focused on two fundamental challenges:
- Securing usage: What tools are employees using? What data are they sending to chatbots or code assistants?
- Securing development: What permissions do your internal agents have? What happens when they go off-script?
My message resonated deeply with government and enterprise leaders seeking clarity in the fog of agentic AI hype. We were also thrilled to be joined by Protect AI, who underscored the same trajectory, highlighting that GenAI and LLM security aren’t just niche concerns, but the new center of gravity for the entire cybersecurity industry.
Behind the Scenes at the IEI Cybersecurity Networking Event
RSAC isn’t just about booths and badges. At the IEI Networking Night, Lasso brought its cowboy spirit (boots, belt buckles, and all) into deeper conversations with CISOs, investors, and engineers.
It’s here, in the more informal setting, that key themes surfaced:
- Autonomous agents need constraint frameworks
- Security teams need faster visibility into GenAI behaviors
- Real compliance depends on understanding how AI makes decisions
If you spotted someone explaining prompt injection while holding a whisky, that was probably one of us.

RSAC Trends That Reinforce Lasso’s Roadmap
Judging from this year’s RSAC, the conversation around GenAI has matured. A year ago, people were still asking, “Can we use it?” Now the only question is, “How do we use it safely?”
Key concerns echoed across panels:
- Prompt injection and jailbreaks
- Agentic privilege abuse
- Lack of visibility into API-based decision chains
- Over-trusting outputs without validation
All of these align with the threats Lasso already secures against. Our commitment to context-based access control (CBAC), real-time policy enforcement, and dynamic red teaming means we’re already ahead of what others are just beginning to map out.
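The last of those concerns, over-trusting outputs, has a simple structural remedy: never act on raw model output, but parse and validate it against a strict schema first. A hedged sketch, with hypothetical field names and an illustrative allow-list:

```python
# Illustrative sketch: validate model output before acting on it.
# Field names and the SAFE_ACTIONS list are assumptions for this example.
import json

EXPECTED_FIELDS = {"action": str, "target": str}
SAFE_ACTIONS = {"summarize", "classify"}

def validate_output(raw: str) -> dict:
    """Reject output that is malformed or requests an unapproved action."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or malformed field: {field!r}")
    if data["action"] not in SAFE_ACTIONS:
        raise ValueError(f"unapproved action: {data['action']!r}")
    return data

# Well-formed, approved output passes; anything else fails closed.
validate_output('{"action": "summarize", "target": "incident-report"}')
```

Failing closed on anything unexpected is what keeps a jailbroken or injected response from turning into an unauthorized action downstream.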
Where We Go From Here: How to Secure Custom-Built AI Agents
Gartner’s 2025 report, How to Secure Custom-Built AI Agents, makes one thing clear: enterprise AI agents are the new attack surface. And most organizations are not ready.
“Through 2029, over 50% of successful cybersecurity attacks against AI agents will exploit access control issues, using direct or indirect prompt injection as an attack vector.” – Gartner*.
Agentic systems inherit user privileges, operate via APIs, and can be manipulated via memory or prompt-based interference. That’s why Lasso integrates protections at every level:
- Discovery of AI agents (shadow deployments, API monitoring)
- Runtime Defense using behavioral anomaly detection
- Credential Isolation and enforcement of access boundaries
- Red teaming & remediation against promptware and agent hijacking
- Governance & Compliance via dashboards aligned with NIST, MCP, and the AI Act
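The runtime-defense layer above can be illustrated with a toy behavioral monitor: flag any tool call that falls outside an agent's observed baseline. Class and tool names here are invented for the example and say nothing about Lasso's actual detection logic:

```python
# Minimal sketch of runtime behavioral monitoring for an agent:
# flag tool calls outside the agent's established baseline.
# All names and the never-seen-before heuristic are illustrative.
from collections import Counter

class AgentMonitor:
    def __init__(self, baseline_tools):
        self.baseline = set(baseline_tools)  # tools seen during normal operation
        self.calls = Counter()               # running tally per tool

    def record(self, tool: str) -> bool:
        """Record a call; return True if it looks anomalous."""
        self.calls[tool] += 1
        return tool not in self.baseline

monitor = AgentMonitor(baseline_tools=["search_docs", "draft_reply"])
monitor.record("search_docs")      # expected behavior -> False
monitor.record("export_database")  # never seen before -> True, escalate or block
```

A real system would use richer signals (sequences, arguments, timing), but the shape is the same: model what normal agent behavior looks like, then act on deviations in real time.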
We're also tracking the Model Context Protocol (MCP), an open-source initiative gaining traction for securing agent orchestration, memory handling, and API/tool execution boundaries. As it evolves, you can expect Lasso to integrate with it, too.
Ready to Talk Agentic AI Security?
If you’re building or already using custom GenAI agents in your organization, now is the time to rethink your security posture. The tools you used for traditional apps won’t cut it.
Let’s talk about securing your GenAI future. Book a security assessment with the Lasso team and explore how our unified platform protects your apps, your users, and your AI agents.
*Gartner: How to Secure Custom-Built AI Agents, 17 March 2025, ID G00824390, by Dionisio Zumerle and Jeremy D'Hoinne