Back to all posts

LLM Risks: Enterprise Threats and How to Secure Them

Elad Schulman
Elad Schulman
October 21, 2025
7
min read
LLM Risks: Enterprise Threats and How to Secure Them

The rise of large language models (LLMs) is forcing enterprises to rethink software security from the ground up. These models do not follow the predictable logic of traditional code. They learn, generate, and interact in ways that blur the boundary between user input and system behavior, introducing new risks that conventional security tools were never built to detect.

What Are LLM Risks?

Large language model (LLM) risks refer to the security, privacy, and integrity challenges that arise when GenAI models generate, process, or act on sensitive information. 

Now that LLMs are embedded across applications, workflows, and infrastructure, their risks have become a distinct category of concern, separate from traditional application security. The vulnerabilities that matter most no longer live in code alone, but in the data, prompts, and behaviors that shape how these models operate.

LLM Risks vs. Traditional Application Security

Traditional AppSec has a predictable logic: secure the code, harden the infrastructure. Then monitor for known exploits. LLMs don’t follow this script. As probabilistic systems shaped by training data and contextual signals, their LLM vulnerabilities are woven into how they generate outputs, not just how they’re built.

This is a new and unique threat surface that adversaries can exploit without touching a single line of code.

Category LLM Risks Application Security
Attack Surface Prompts, embeddings, training data, output handling, plugins/APIs Server endpoints, source code, APIs
Data Exposure Sensitive information leakage via outputs, prompt injection, model inversion SQL injection, misconfigured databases, insecure storage
Attack Vectors Prompt injection, data poisoning, jailbreaks, adversarial queries Code injection, buffer overflow, cross-site scripting (XSS)
Supply Chain Vulnerabilities Poisoned training data, vector databases that have been tampered with Malicious or outdated dependencies, third-party libraries, CI/CD pipeline risks
Detection & Monitoring Requires real-time logging of inputs/outputs, anomaly detection on generated responses Signature-based intrusion detection, static/dynamic code analysis
Access Controls Context-based access controls (who can see what outputs, under which conditions) Role-based access control (RBAC), least privilege enforcement

Why LLM Risk Management Matters

Gartner predicts that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025, with inadequate risk controls cited as a key factor alongside poor data quality, escalating costs, and unclear business value. As these systems become embedded in customer-facing applications, internal workflows, and strategic decision-making processes, the stakes for managing their risks have never been higher.

Regulatory pressures and compliance expectations

Governments are moving to establish guardrails for AI. The EU's AI Act classifies certain AI applications as high-risk and mandates stringent requirements around transparency, documentation, and human oversight. In the United States, executive orders on AI have established reporting requirements for foundation model developers. Sector-specific agencies are issuing guidance for AI use in healthcare, financial services, and employment. 

Then there are expectations arising from industry standards and certification bodies. ISO/IEC 42001 provides the first international standard specifically for AI management systems. As ISO 42001 gains traction, it's becoming a competitive differentiator.

Business risks from exploited or misbehaving models

The business consequences of LLM failures extend far beyond fines. Prompt injection attacks can cause systems to leak sensitive data, or bypass security controls. A single incident can result in immediate financial losses, made worse by expensive incident response efforts.

Operational disruptions also merit consideration. Over-reliance on LLM systems without adequate fallback mechanisms creates fragility in business processes. When models degrade, produce unusable outputs, or become unavailable, organizations lacking proper risk management find their operations grinding to a halt.

Internal governance challenges with AI adoption

Shadow AI, the use of external LLM services without IT approval, is a growing concern as organizations realize their AI footprint is much larger than they had assumed (or wanted it to be). Employees seeking productivity gains turn to public chatbots, often pasting sensitive company data, customer information, or proprietary code into systems where it may be retained, used for training, or inadequately protected.


Establishing clear ownership and accountability for AI projects proves surprisingly difficult. Is the data science team responsible for model behavior, or the business unit deploying it? Who monitors for drift in model performance over time? When problems arise, diffused responsibility leads to slow responses and finger-pointing rather than effective remediation.

Top 10 Enterprise-Relevant LLM Security Risks

  1. Prompt injection and jailbreak techniques
    Attackers manipulate prompts (directly or indirectly) to override safeguards, or exfiltrate data. 
  2. Insecure output handling and hallucinated content
    Poor validation of model outputs can enable XSS, SQL injection, or privilege escalation if hallucinated or manipulated responses are blindly trusted.
  3. Supply chain risks in model dependencies and integrations
    Vulnerabilities arise from third-party models, LoRA adapters, plugins, or poisoned datasets. Enterprises risk importing malicious code or tampered weights from public repositories.
  4. Training data poisoning and model corruption
    Adversaries seed untrusted data into pre-training or fine-tuning pipelines, introducing backdoors, bias, or sleeper-agent behaviors that activate later.
  5. Model theft and reverse engineering
    Excessive or uncontrolled querying can allow adversaries to replicate model functionality, extract embeddings, or reconstruct sensitive training data.
  6. Overreliance on automated LLM outputs
    Blind trust in generated content amplifies risks of misinformation and insecure code suggestions.
  7. Vulnerabilities in plugin and API integrations
    Excessive agency (granting LLMs too much autonomy with tools or plugins), can lead to unintended high-impact actions like data deletion or external system compromise.
  8. Leakage of personally identifiable, sensitive, or proprietary data
    System prompt leakage, embedding inversion, and output disclosure can expose credentials, PII, or business secrets.
  9. Model-generated misinformation or manipulative content
    LLMs may confidently output false or biased content, leading to reputational, legal, and operational harms.
  10. Resource drain, misuse, and consumption abuse
    Unbounded consumption attacks can exploit cost-per-query models, overwhelm infrastructure, or even replicate models at scale.

Recent Real-World Examples of LLM Risks

Air Canada Chatbot Liability (February 2024)

When Air Canada's chatbot falsely promised a bereavement fare discount to a customer booking funeral travel, the airline was held legally liable for the misinformation. This case exemplifies the risk of misinformation, where a model generates incorrect information that users rely upon. The lawsuit established that companies bear responsibility for their AI models’ outputs. 

DeepSeek Database Exposure (January 2025)

DeepSeek left a database publicly accessible online, exposing over a million lines of sensitive log data including chat histories and internal information from more than one million users. The incident underscores that AI providers handle particularly sensitive data deserving stronger-than-usual protection.

Emerging LLM Risk Trends for Enterprises

AI supply chain vulnerabilities and third-party dependencies

Enterprises are increasingly consuming models and plugins from public hubs and vendor ecosystems. That accelerates innovation and shortens development cycles. But it’s also creating a porous AI supply chain.

Recent findings show malicious or booby-trapped models surfacing on public repositories, highlighting real code-execution risks and impersonation abuse that can slip into enterprise pipelines if vetting is weak. 

Adversarial ML attacks targeting model integrity

Model integrity has become a first-order concern as attackers experiment with ways to manipulate or destabilize LLMs. Adversaries are finding methods to alter model behavior that go beyond data poisoning: exploiting model drift, or repurposing outputs in harmful ways. To achieve resilience, enterprises need to be as agile as these attackers, and treat LLMs as living systems that are continually tested and reinforced. 

Increasing sophistication of prompt injection methods

Prompt injection has matured from simple jailbreaks to indirect and hybrid attacks that piggyback on untrusted content and tools. These attacks can also use agent-to-agent exchanges, blending language tricks with classic exploitation to reach data exfiltration or RCE. Because of this, major vendors now ship dedicated controls.

Risks from fine-tuning with unvetted data sources

Fine-tuning on “found” or partner data can quietly poison behavior. OWASP’s LLM Top-10 classifies data/model poisoning as a core risk, and 2024 research on “jailbreak-tuning” shows that even small fractions of tainted data can reliably erode safeguards at scale. Enforce strict curation, chain-of-custody, and reproducible training; run pre-train and post-train evals for safety regressions before promotion to prod.

Agentic Lateral Movement: The Rise of IdentityMesh

Agentic AI architectures, where LLM-powered agents can read, write, and act across multiple systems, introduce new vulnerabilities, too. Lasso’s IdentityMesh research reveals how attackers can exploit the merged identity layer within these systems to perform cross-system operations. By embedding malicious instructions through indirect prompt injection, adversaries can hijack an AI agent’s legitimate access to move laterally across applications like Slack, Notion, or GitHub, exfiltrating data or triggering unauthorized actions along the way. 

Best Practices and Core Components for LLM Security

Best Practice Why It Matters Enterprise Implementation
Input validation and prompt filtering Blocks malicious instructions (prompt injection, data poisoning) before they reach the model.
  • Sanitize queries
  • Enforce whitelists/blacklists
  • Use regex or ML classifiers for malicious inputs
Output monitoring and moderation mechanisms Prevents sensitive data leaks, hallucinations, or policy violations.
  • Deploy automated output filters
  • Flag PII, secrets, or toxic content before user delivery
Role-based access control (RBAC) for LLM interfaces Ensures only authorized users can access sensitive data or privileged model functions.
  • Map LLM actions to user roles
  • Integrate with IAM and MFA
Data encryption (at rest and in transit) Protects training data, embeddings, and outputs from interception or theft.
  • Use AES-256, TLS 1.3
  • Rotate keys regularly with centralized KMS
Restricting plugin use to approved sources Reduces supply chain vulnerabilities and prevents abuse of external tools.
  • Maintain allowlists for plugins/APIs
  • Require signed artifacts and code provenance
LLM-specific threat modeling processes Identifies unique attack vectors (prompt injection, model inversion, output hijacking).
  • Extend STRIDE / ATT&CK to LLM workflows
  • Integrate MITRE ATLAS techniques
Logging, monitoring, and versioning of models Enables forensic analysis after incidents and tracks model drift over time.
  • Centralize logs of inputs/outputs
  • Version model weights, configs, and prompts
Automated alerting on suspicious model behavior Detects real-time anomalies such as exfiltration attempts or prompt bypasses.
  • Configure anomaly detection
  • Trigger alerts via SIEM/SOAR integration
Policy controls to limit prompt and output abuse Enforces guardrails on what the model can accept and return.
  • Apply context-based policies
  • Block queries outside the business scope
  • Enforce safe output schemas

Regulatory and Compliance Considerations for LLMs

Mapping LLM risk controls to global privacy laws

Under frameworks like the GDPR and CCPA, organizations remain responsible for any personal or sensitive information that LLMs use. This includes data surfaced through prompts, logs, or outputs. To align with these obligations, enterprises must apply data minimization, purpose limitation, and privacy-by-design principles to every stage of their LLM lifecycle.

Key compliance controls include:

  • Implementing data classification and masking for prompts and outputs that may contain personal data.
  • Applying context-based access control (CBAC) to ensure only authorized personnel or applications can interact with sensitive datasets.
  • Maintaining clear data processing records to support accountability and meet “right to explanation” or “right to erasure” requests.


NIST and ISO guidance on AI governance

Emerging frameworks like NIST’s AI Risk Management Framework (AI RMF) and ISO/IEC 42001 are shaping a standardized approach to AI governance. Both frameworks emphasize operational transparency, risk-based assessment, and continuous monitoring of AI. For LLM implementations, this means establishing governance policies that explicitly address:

  • Model provenance and version control: Ensuring every deployed model can be traced to its source, configuration, and training data.

  • Access and oversight: Defining clear ownership for LLM operations across security, compliance, and data teams.

  • Continuous risk assessment: Integrating LLM-specific testing (e.g., red-teaming, prompt injection simulation, and data-leak detection) into enterprise audit processes.


Audit trails and explainability requirements

Auditors and regulators increasingly expect AI models and apps to provide traceable, explainable outputs. That’s a major challenge for probabilistic models like LLMs. To meet these expectations, organizations should implement:

  • Comprehensive logging of prompts, responses, and model decisions to support forensic analysis and accountability.
  • Versioned model registries documenting configuration changes, fine-tuning datasets, and updates to security guardrails.
  • Explainability-by-design approaches that record the model’s reasoning chain or reference data sources, particularly in regulated industries like finance, healthcare, and public services.


Incident Response for LLM Security Breaches

Speed matters when an LLM security breach occurs. Incident response needs to be swift, and tailored to the characteristics of the threat in question: prompt injection, data poisoning, or some other LLM-specific attack vector.


Rapid detection and isolation of compromised models

Organizations should implement real-time monitoring and anomaly detection across LLM queries and outputs to flag potential security risks such as data leakage, jailbreaks, or remote code execution attempts. Research shows that attackers increasingly use LLM-specific vulnerabilities like supply chain poisoning and prompt injection to trigger data breaches. Early containment requires automated workflows to isolate affected applications and revoke access controls before threats spread across the enterprise environment.


Forensic analysis of model prompts and responses

Because LLMs operate in a non-deterministic way, traditional application security tools aren’t sufficient. Forensic analysis must include reconstructing user prompts, system prompts, and model responses to identify how adversaries manipulated outputs or extracted sensitive data. Comprehensive logging of inputs and outputs, combined with penetration-test style red teaming, helps security teams retrace attack vectors and prevent recurrence. This is especially important for maintaining compliance with data protection regulations like GDPR or the EU AI Act.


Notification workflows for affected stakeholders

Data privacy frameworks require timely disclosure when breaches involve sensitive data, intellectual property, or personal information. Effective incident response playbooks should include structured communication protocols to notify legal, compliance, and business leaders alongside regulators where required. Recent guidance stresses that failing to notify stakeholders promptly can create not just regulatory penalties but also reputational fallout. Clear notification workflows ensure alignment across security, legal, and compliance teams, minimizing the damage from potential security threats.

Tools for Securing LLMs

Lasso Security’s AI Risk Management Platform

Lasso Security’s AI risk management platform provides continuous monitoring and real-time detection of GenAI-specific threats. By autonomously discovering and tracking all LLM interactions, Lasso creates a unified audit trail and enforces context-based security policies at every touchpoint. The platform integrates with enterprise environments through a single line of code, giving CISOs and compliance teams a clear view of data flow, model behavior, and regulatory exposure.

Prompt Injection Monitoring Solutions

Prompt injection remains one of the most dangerous attack vectors for enterprise LLMs. New-generation monitoring tools are emerging to detect and block malicious instructions hidden in text, documents, or third-party content. These solutions apply semantic filtering, contextual validation, and real-time anomaly detection to intercept potentially harmful prompts before they reach the model.

Data Provenance and Model Lineage Tools

Data provenance and lineage solutions provide transparency into where training and fine-tuning data originates, how it has been modified, and what influence it exerts on model outputs.  

Lineage tracking is critical for mitigating data poisoning and maintaining model integrity across the AI supply chain. Emerging tools integrate with MLOps platforms to automate documentation, detect inconsistencies in datasets, and flag unverified data sources before they enter the training pipeline.

Open-Source GenAI Security Frameworks

The open-source ecosystem is rapidly developing security frameworks designed to protect GenAI applications. Initiatives like OWASP’s LLM Top 10 and Agentic AI risks offer practical controls for prompt validation, policy enforcement, and safe output handling.


How Lasso Secures and Governs Enterprise LLM Deployments

Lasso provides an integrated layer of defense across every point where users, data, and models interact. Its AI risk management platform autonomously discovers and monitors all LLM activity, detecting threats like prompt injection, data leakage, and model misuse in real time.

Lasso’s context-based access controls and policy-driven governance prevent unauthorized data flows and maintain compliance with global privacy standards. By establishing full audit trails of prompts, responses, and user actions, it gives organizations the transparency that regulators now demand, and the operational assurance that security leaders need.

In an environment where traditional software security no longer reaches, Lasso delivers the guardrails that make large language models safe to deploy at scale. Book a demo to start transforming enterprise AI from an unknown risk into a trusted capability.

Seamless integration. Easy onboarding.

Schedule a Demo
cta mobile graphic
Text Link
Elad Schulman
Elad Schulman
Text Link
Elad Schulman
Elad Schulman