A year ago, we took our best shot at forecasting how GenAI security would evolve. Now, standing at the close of 2025, we’re taking a look at our predictions in light of what actually ended up happening.
Some of our calls were spot-on, others took unexpected turns. But every shift reveals something about how GenAI is growing up.
LLM Security Predictions We Got Right
2025: The Year Of The Agent
As compliance frameworks solidified, enterprises finally had the confidence to let AI do more than just assist. Now, it could act.
AI agents began carrying out tasks autonomously, even coordinating multiple tools and data sources to achieve set goals.
Across industries, agent-driven tools are handling full processes end to end. They’re diagnosing IT issues, or reconciling financial data, even generating code that can pass internal reviews.
Agentic AI has had a transformative impact in a number of key areas:
- Operational acceleration: Enterprises report faster project cycles as agents handle repetitive, rule-based tasks.
- Decision-making at real speed: AI agents can surface and verify information across business units. Leaders benefit from a live view of operational data, rather than quarterly snapshots.
- New infrastructure layers: Standards like the Model Context Protocol (MCP) and new orchestration APIs formalized how AI agents communicate with external systems, turning fragmented automations into connected ecosystems.
Of course, all of these benefits came with an expanded risk surface. At the beginning of the year, security teams were getting the hang of monitoring prompts and outputs. But at this point, they’re managing the actions that models execute, and the data sources they can access.
System Prompt Leakage as Major Vulnerability
We called system prompts the “Achilles’ heel” of LLM security, and 2025 proved us right.
System prompt leakage officially entered the OWASP Top 10 for LLM Applications 2025 as LLM07. This reprioritization shows that the blueprints guiding GenAI behavior are at risk.
Throughout 2025, we saw a surge in “PLeak” (Prompt Leak)–style exploits, where adversaries successfully reconstructed hidden guardrails, policies, and developer instructions from black-box LLM deployments.
Even frontier models weren’t immune. In one public incident, Grok, X’s conversational AI, briefly exposed its internal system prompts. This revealed the hidden instructions behind several of its AI personas. The leak offered a glimpse into the model’s operational constraints and guardrails, a reminder that even well-secured systems can reveal more than intended.
RAG Security Under Attack
The excitement around retrievement augmented generation (RAG) architectures came from their potential as an antidote to hallucinations. It gives LLMs the ability to ground their responses in trusted, up-to-date knowledge bases. But that same mechanism also opened a new and highly exploitable attack surface. By 2025, RAG security had evolved from a theoretical concern into a formalized discipline.
Academic studies and some security research have documented how corpus poisoning, embedding manipulation, and retrieval hijacking can compromise RAG systems.
In response, the industry began embracing security-by-design principles for RAG architectures, emphasizing encryption, granular access control, and ongoing retrieval monitoring. We also saw the rise of specialized frameworks like TrustRAG, which uses K-means clustering to detect and mitigate poisoned embeddings. All of this marks the beginning of a more defensive, context-aware generation of RAG implementations.
AI Compliance Taking Center Stage
When we predicted that 2025 would be the year AI compliance takes center stage, we weren’t exaggerating. But we may have underestimated just how global that shift would become.
Compliance has moved from being a reactive checkbox exercise to a core pillar of enterprise AI maturity. As organizations have transitioned from experimental LLM pilots to full-scale production deployments, governance frameworks and auditability have become non-negotiable.
In the U.S., 2025 saw political back-and-forth. But momentum outside the U.S. has accelerated.
- The EU AI Act moved from draft to enforcement, classifying AI systems by risk level.
- The UK’s AI Principles Framework gained traction as a flexible alternative emphasizing accountability.
- ISO/IEC 42001 was introduced, the first global management standard for AI governance, establishing a common language for responsible deployment.
In the U.S., Local States have started filling the vacuum of a unified federal approach. The most significant move came from California’s SB 53, which Governor Gavin Newsom signed in late 2025: the first U.S. law specifically targeting frontier AI models. SB 53 requires large AI developers to publish safety frameworks, disclose risk assessments, and report critical incidents to state authorities.
Hindsight Is 2025: What We’d Rethink, a Year Later
Domain-Specific LLM Agents: The Hype vs. The Reality
Many expected an explosion in industry-specific LLMs in 2025. As it turns out, agents boomed, but domain specialization didn’t.
Agentic AI certainly did take center stage. Anthropic’s Claude Sonnet 4 and Claude Code led the agent revolution, training models to use tools, retrieve data, and execute tasks across applications.
But domain-specific adoption lagged behind. Instead of verticalized models tailored to industries, enterprises consolidated around a few high-performing, closed-source frontier models.
General-purpose models also learned to act like specialists, by training as multi-modal and multi-tool agents. This blurred the line between general and domain-specific intelligence, delivering specialization through behavior rather than architecture.
So the agentic AI revolution happened, just not in the way we expected. Rather than a proliferation of small, industry-bound models, 2025 was the year general-purpose models learned to act like experts.
Small Models Don’t Necessarily Mean Better Security
In late 2024, there was some evidence to suggest that smaller, domain-specific models would offer stronger security. This seemed likely, because of the reduced attack surfaces and on-premises control that smaller models allow. In practice, 2025 showed that security depends more on architecture than size.
Small models are easier to deploy, but smaller organizations often lack the in-house expertise to secure them effectively. Enterprises, meanwhile, have gravitated toward high-performing frontier models.
Rather than model size, the industry’s attention shifted to architectural security controls like dynamic guardrails, context-based access control (CBAC), and external policy enforcement.
The bottom line: the size of the model is less critical than how intelligently you’re governing it.
The Great Consolidation of 2025
If 2024 was the year of experimentation, 2025 was the year of consolidation. Enterprise AI coalesced around a few dominant frontier models, marking a decisive pivot in both market structure and mindset.
Closed-source models took the lead.
Anthropic overtook OpenAI in enterprise usage (32% vs. 25%), while open-source adoption declined from 19% to 13%. Enterprises doubled down on frontier models, valuing performance, reliability, and compliance over flexibility.
Code generation became GenAI’s first true killer app.
Engineering teams embraced LLM-based code assistants across DevSecOps pipelines, driving $1.9 billion in ecosystem value and redefining how development and security workflows operate.
AI budgets shifted from training to inference.
Rather than building new models, organizations focused on deploying existing ones efficiently. By mid-2025, 74% of startup workloads were already in production, showing just how far GenAI has matured beyond pilot phases.
Over the course of 2025, major security providers responded to this rapid consolidation with acquisitions of smaller, AI-focused firms. The move reflected a growing consensus: keeping pace with the speed and complexity of large language model deployment required specialized expertise that few legacy players possessed internally.
What Comes Next for Secure and Responsible AI?
If 2025 was the year AI security frameworks matured, 2026 will be the year they’re tested. For the enterprise, the next chapter will be all about proving trust at scale. At Lasso, we’ll continue tracking how enterprises operationalize AI governance, and how security can keep pace with an increasingly agentic, interconnected model ecosystem.
.avif)

.png)
.avif)
.avif)

