Breaking the model - when applications and agents misbehave

It started with a simple prompt and ended in a data breach. Welcome to the wild west of Generative AI, where models hallucinate, guardrails break, and bad actors get creative.
In this session, Martin Miller will go through real-world failures where GenAI went completely off-script. From prompt injections that exposed sensitive data to chatbots manipulated into leaking secrets or writing malicious code, we will unpack what went wrong and why.
You will also come away with insights on how to build safer, smarter GenAI systems, and how to stop your GenAI from going rogue before it makes headlines.