Organizations are racing to integrate Large Language Models (LLMs) into their applications. However, security concerns are growing just as fast.
OWASP recently published the “Agentic AI – Threats and Mitigations” guide, which offers a powerful lens for assessing and mitigating these emerging threats. But knowing the risks is only half the battle.
That’s where Lasso comes in: protection for applications and agents, plus our unique MCP Gateway, enforcing secure context guardrails, monitoring prompts, and providing real-time logging of model behavior to safeguard your GenAI stack.
Let’s explore the top security risks and how proactive defenses can protect your GenAI agents.
While in the recent OWASP Top 10 for LLM Applications, Prompt Injection, Sensitive Information Disclosure, and LLM Supply Chain Vulnerabilities were the top three concerns enterprises face, for Agentic AI the top three threats are Memory Poisoning, Tool Misuse, and Privilege Compromise.
Sensitive Information Disclosure remains a baseline concern: LLMs can inadvertently leak personally identifiable information (PII), proprietary algorithms, or internal prompts. These leaks can stem from weak input/output filtering or insufficient user education. Lasso combats this with contextual access controls, output validation policies, and redaction tools embedded in its policy engine, ensuring that sensitive data is never surfaced unintentionally.
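As a concrete (if simplified) picture of output redaction, here is a minimal sketch using regex-based PII patterns. The pattern table and function names are illustrative assumptions, not Lasso’s policy engine:

```python
import re

# Hypothetical patterns; a production policy engine would cover far more.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_output(text: str) -> str:
    """Replace matched PII with typed placeholders before the response ships."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

print(redact_output("Contact jane@corp.com, SSN 123-45-6789."))
# -> Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```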
1. Memory Poisoning
AI agents often use short- and long-term memory to store prior actions, user interactions, or persistent state. Attackers can poison these memories, gradually altering an agent’s behavior to reflect false data or instructions. This leads to long-term, stealthy manipulation. Lasso’s security plugin for the MCP Gateway mitigates memory poisoning by isolating session memory, validating data sources, and enabling rollback via forensic memory snapshots.
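A minimal sketch of those three controls, assuming a toy `SessionMemory` class (all names here are hypothetical, not Lasso’s API): memory is keyed to a session, writes are rejected unless the source is trusted, and snapshots allow rollback after a suspected poisoning.

```python
import copy
import time

class SessionMemory:
    """Toy per-session memory store with provenance checks and snapshots."""

    def __init__(self, session_id: str, trusted_sources: set[str]):
        self.session_id = session_id          # isolates memory per session
        self.trusted_sources = trusted_sources
        self.entries: list[dict] = []
        self.snapshots: list[tuple[float, list[dict]]] = []

    def write(self, content: str, source: str) -> None:
        if source not in self.trusted_sources:  # validate the data source
            raise PermissionError(f"untrusted memory source: {source}")
        self.entries.append({"content": content, "source": source})

    def snapshot(self) -> None:
        """Record a forensic snapshot that rollback() can restore."""
        self.snapshots.append((time.time(), copy.deepcopy(self.entries)))

    def rollback(self) -> None:
        if self.snapshots:
            _, self.entries = self.snapshots.pop()

mem = SessionMemory("sess-42", trusted_sources={"user", "crm-api"})
mem.snapshot()
mem.write("customer prefers email", source="user")
mem.rollback()  # restores the pre-write state after a suspected poisoning
```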
2. Tool Misuse
Agents integrated with tools can be manipulated into executing malicious actions using deceptively crafted prompts. From abusing calendar integrations to triggering automated emails, these tools become vectors for attack. Lasso enforces tool usage boundaries through function-level policies and real-time validation, ensuring agents cannot invoke tools outside their role or without context-aware authorization.
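To illustrate function-level tool policies, here is a hedged sketch with a hypothetical `TOOL_POLICY` table and `invoke_tool` wrapper (not Lasso’s actual interface): every call is checked against the caller’s role before it executes.

```python
# Hypothetical role-to-tool policy table; real policies would be richer.
TOOL_POLICY = {
    "support-agent": {"search_kb", "draft_reply"},
    "scheduler-agent": {"read_calendar", "create_event"},
}

def invoke_tool(role: str, tool: str, call, *args, **kwargs):
    """Gate every tool call against the caller's role before executing."""
    allowed = TOOL_POLICY.get(role, set())
    if tool not in allowed:
        raise PermissionError(f"{role} may not invoke {tool}")
    return call(*args, **kwargs)

# A prompt-injected request to send email from the scheduler agent is denied:
try:
    invoke_tool("scheduler-agent", "send_email", print, "phish@evil.test")
except PermissionError as e:
    print("blocked:", e)
```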
3. Privilege Compromise
When agents inherit user privileges or operate with elevated roles, attackers may exploit these configurations to perform unauthorized operations. Without strict RBAC and identity separation, agents become conduits for privilege escalation. Lasso reduces this risk with scoped API keys, least-privilege enforcement, and identity-bound permissions across agent-tool interactions.
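A sketch of scoped, identity-bound credentials, using hypothetical `ScopedKey` and `authorize` helpers; a real deployment would tie this to a secrets manager and token issuance rather than in-memory objects.

```python
from dataclasses import dataclass, field

@dataclass
class ScopedKey:
    """Toy identity-bound credential: an agent id plus explicit scopes."""
    agent_id: str
    scopes: frozenset = field(default_factory=frozenset)

def authorize(key: ScopedKey, action: str) -> None:
    # Deny by default: an action absent from the key's scopes never runs.
    if action not in key.scopes:
        raise PermissionError(f"{key.agent_id} lacks scope {action!r}")

key = ScopedKey("billing-agent", frozenset({"invoices:read"}))
authorize(key, "invoices:read")       # permitted
# authorize(key, "invoices:delete")   # raises PermissionError
```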
What’s the difference, and why should you worry about it?
While the OWASP Top 10 for LLM Applications focuses on risks like Prompt Injection, Sensitive Information Disclosure, and Supply Chain Vulnerabilities, these are largely rooted in traditional request/response interactions, where threats arise from compromised inputs, unfiltered outputs, or vulnerable model dependencies. These issues, while serious, are primarily stateless and reactive in nature.
In contrast, Agentic AI introduces a paradigm shift: agents operate with autonomy, long-term memory, reasoning loops, and tool integration. This fundamentally alters the threat landscape. The new top three concerns are stateful, dynamic, and context-driven, making them significantly harder to detect and remediate.
For example, memory poisoning can persist across sessions, affecting decision logic over time. Tool misuse transforms agents into vectors for lateral movement or remote code execution within business workflows. And privilege compromise allows adversaries to silently escalate access by manipulating agent behavior or identity flows.
More security risks in Agentic AI
4. Resource Overload
By design, agents perform multiple operations concurrently, often triggering external APIs and spawning subtasks. Attackers can exploit this behavior to overwhelm compute and memory, causing denial-of-service or degraded performance. Lasso's agent rate-limiting and compute quota controls prevent abuse, offering automatic suspensions and monitoring to avoid overload scenarios.
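As an illustration, a classic token-bucket limiter caps how fast an agent can spawn work; the capacity and refill rate below are arbitrary assumptions, not Lasso defaults.

```python
import time

class TokenBucket:
    """Simple per-agent rate limiter; parameter values are assumptions."""

    def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should suspend or queue the subtask

bucket = TokenBucket(capacity=5, refill_per_sec=0.5)
calls = [bucket.allow() for _ in range(8)]
print(calls)  # the first 5 pass, the rest are throttled
```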
5. Cascading Hallucinations
Unlike standalone LLMs, agents with memory or communication capabilities can compound hallucinations across sessions and systems. A single fabricated fact can snowball into systemic misinformation. Lasso applies source attribution, memory lineage tracking, and output validation to break these cascades and ensure factual integrity throughout agent workflows.
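One way to picture source attribution is a provenance gate on memory writes. The sketch below uses a hypothetical `Claim` type: unattributed or unverified claims never enter shared memory, which breaks the chain by which a fabricated fact propagates.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    text: str
    source: str | None  # provenance; None means unattributed

def accept_claim(claim: Claim, verified_sources: set[str]) -> bool:
    """Only attributed claims from verified sources may enter shared memory."""
    return claim.source in verified_sources

verified = {"crm-api", "docs-index"}
print(accept_claim(Claim("Q3 revenue grew 12%", "crm-api"), verified))  # True
print(accept_claim(Claim("CEO resigned yesterday", None), verified))    # False
```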
6. Intent Breaking & Goal Manipulation
Agents determine their own goals and execution plans, but adversaries can subtly inject goals or alter planning logic via prompts, tools, or memory inputs. This hijacks the agent’s intent, leading to destructive actions. Lasso’s behavioral monitoring and goal-consistency validators detect plan deviations and trigger secondary model review or HITL gating.
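A toy goal-consistency check, assuming a hypothetical mapping from a declared goal to the action types it justifies; any plan step outside that set is flagged for secondary review. The goal and action names are made up for illustration.

```python
# Hypothetical mapping from a declared goal to the actions it justifies.
GOAL_ACTIONS = {
    "summarize-ticket": {"read_ticket", "write_summary"},
}

def validate_plan(goal: str, plan: list[str]) -> list[str]:
    """Return plan steps the declared goal does not justify; in practice
    these deviations would go to a reviewer or a secondary model."""
    allowed = GOAL_ACTIONS.get(goal, set())
    return [step for step in plan if step not in allowed]

deviations = validate_plan(
    "summarize-ticket",
    ["read_ticket", "export_customer_db", "write_summary"],
)
print(deviations)  # ['export_customer_db'] -> escalate for human review
```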
7. Misaligned and Deceptive Behaviors
Agents, particularly those trained to optimize for completion or efficiency, may perform unsafe actions while appearing compliant. Deceptive agents may even lie, manipulate, or sidestep safety checks. Lasso uses deception detection models and enforceable policy constraints to ensure agents act transparently and in line with business objectives.
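One simple transparency check, sketched below with hypothetical names: compare the agent’s self-reported actions against the audited tool-call log, and flag any mismatch as a deception signal.

```python
def report_matches_log(reported: list[str], executed: list[str]) -> bool:
    """A transparent agent's self-report should cover exactly what it ran;
    any mismatch is a signal of deceptive or misaligned behavior."""
    return sorted(reported) == sorted(executed)

executed_calls = ["read_calendar", "send_email"]  # from the audited tool log
agent_report = ["read_calendar"]                  # agent omits the email
print(report_matches_log(agent_report, executed_calls))  # False -> flag it
```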
8. Repudiation & Untraceability
Agents that make autonomous decisions without reliable logging create blind spots. Attackers can exploit poor observability to hide data exfiltration or unauthorized actions. Lasso’s immutable, cryptographically signed logs provide forensic traceability for every prompt, output, and decision point, which is essential for debugging, compliance, and audits.
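A minimal sketch of tamper-evident logging using Python’s standard hmac and hashlib: each entry is chained to the previous signature, so altering any past record breaks verification. Lasso’s actual signing scheme is not public; this only illustrates the general technique.

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # in practice, a managed key, not a constant

def append_entry(log: list[dict], event: dict) -> None:
    """Chain each entry to the previous signature and sign the payload."""
    prev = log[-1]["sig"] if log else ""
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "prev": prev, "sig": sig})

def verify(log: list[dict]) -> bool:
    """Recompute every signature; any tampering breaks the chain."""
    prev = ""
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        expected = hmac.new(SECRET, payload.encode(),
                            hashlib.sha256).hexdigest()
        if entry["prev"] != prev or not hmac.compare_digest(entry["sig"],
                                                            expected):
            return False
        prev = entry["sig"]
    return True

audit: list[dict] = []
append_entry(audit, {"agent": "a1", "action": "send_email"})
print(verify(audit))  # True; mutate any entry and this flips to False
```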
9. Identity Spoofing & Impersonation
In multi-agent systems or agent-user interactions, adversaries may spoof identities to perform actions under another persona. This leads to data leaks, compliance violations, or manipulated workflows. Lasso enforces identity validation with behavioral profiling, mutual authentication, and session-scoped agent keys, mitigating impersonation risk.
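Session-scoped keys can be illustrated with a small HMAC check: a message signed without the per-session key fails verification, so a spoofed sender is caught. The names and key handling below are simplified assumptions.

```python
import hashlib
import hmac
import secrets

# Session-scoped keys issued at handshake time (values are illustrative).
session_keys = {"agent-a": secrets.token_bytes(32)}

def sign(agent_id: str, message: bytes) -> str:
    return hmac.new(session_keys[agent_id], message, hashlib.sha256).hexdigest()

def verify_sender(agent_id: str, message: bytes, tag: str) -> bool:
    """A spoofed message signed without the session key fails this check."""
    expected = hmac.new(session_keys[agent_id], message,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = b"approve refund #991"
tag = sign("agent-a", msg)
print(verify_sender("agent-a", msg, tag))                     # True
print(verify_sender("agent-a", b"approve refund #999", tag))  # False: spoofed
```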
10. Overwhelming Human-in-the-Loop (HITL)
Attackers may flood human reviewers with alerts, decisions, or ambiguously framed prompts, forcing them to approve malicious actions under pressure or confusion. Lasso reduces this by prioritizing alert queues using risk scores, implementing decision explanations, and batching low-risk approvals to preserve focus for critical intervention points.
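A sketch of risk-scored triage, assuming each alert carries a simple numeric risk score: high-risk items go to reviewers individually, while low-risk items are batched into a digest. The threshold is an arbitrary assumption, not a Lasso default.

```python
import heapq

def triage(alerts: list[tuple[float, str]], batch_threshold: float = 0.3):
    """Route high-risk alerts to reviewers first; batch low-risk ones so
    humans are not flooded into rubber-stamping approvals."""
    heap = [(-risk, name) for risk, name in alerts]  # max-heap via negation
    heapq.heapify(heap)
    urgent, batched = [], []
    while heap:
        neg_risk, name = heapq.heappop(heap)
        (urgent if -neg_risk >= batch_threshold else batched).append(name)
    return urgent, batched

urgent, batched = triage([(0.9, "privilege-escalation"),
                          (0.1, "low-risk approval"),
                          (0.5, "tool anomaly")])
print(urgent)   # ['privilege-escalation', 'tool anomaly'] reviewed one by one
print(batched)  # ['low-risk approval'] handled in a digest
```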
The Lasso Advantage: Secure-by-Design for GenAI
As this new wave of GenAI applications matures, security must be embedded at the protocol level. Lasso’s MCP Gateway is built from the ground up to enforce context boundaries, track every prompt, and block misuse across the model lifecycle. From prompt injection to model poisoning, Lasso offers a single pane of glass for securing your GenAI applications.
As threats evolve beyond traditional input-output vulnerabilities to full-stack agentic architectures, a new model of defense is required. Lasso’s Deputies enforce guardrails for context, permissions, and identity across single- and multi-agent deployments.
Whether you're building copilots, automations, or multi-agent systems, with Lasso, you don’t just secure the model. You secure the mission.