Agentic AI doesn't just automate tasks - it makes decisions, executes transactions, and acts on behalf of customers at speed and scale. That's a competitive advantage. It's also a largely unmapped attack surface.
This session examines what happens when autonomous agents go wrong - through adversarial manipulation, unexpected failure modes, or actions that fall outside the boundaries anyone thought to define. The panel moves beyond threat awareness to what effective guardrails look like in production: how they're designed, where they break down, and who owns them when they do.
- Mapping the agentic abuse surface: prompt injection, credential misuse, and actions that are technically permitted but commercially damaging
- Designing controls that constrain agent behaviour without killing the autonomy that made deployment worthwhile
- Establishing accountability when an agent - not a human - initiates a transaction that goes wrong
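The distinction between "technically permitted" and "commercially damaging" above can be made concrete. As a purely illustrative sketch (all names, thresholds, and the allow-list are hypothetical, not taken from any real deployment), a pre-execution guardrail might layer a commercial limit on top of a basic permission check, and record who initiated the action for accountability:

```python
# Hypothetical sketch of a pre-execution guardrail for an autonomous agent.
# All action names, limits, and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentAction:
    name: str       # e.g. "issue_refund" (assumed action name)
    amount: float   # monetary value of the transaction, if any
    initiator: str  # "agent" or "human" - recorded for accountability

ALLOWED_ACTIONS = {"issue_refund", "update_address"}  # assumed allow-list
SPEND_LIMIT = 500.0  # per-action ceiling; value is illustrative

def guardrail(action: AgentAction) -> tuple[bool, str]:
    """Return (allowed, reason). Blocks actions outside the allow-list,
    and actions that are permitted but exceed a commercial threshold."""
    if action.name not in ALLOWED_ACTIONS:
        return False, f"action '{action.name}' not on allow-list"
    if action.amount > SPEND_LIMIT:
        return False, f"amount {action.amount} exceeds limit {SPEND_LIMIT}"
    return True, "permitted"

# A refund is technically permitted, so a naive permission check passes it;
# the commercial threshold is what stops this one.
ok, reason = guardrail(AgentAction("issue_refund", 10_000.0, "agent"))
```

The point of the sketch is the second check: the first line of defence (the allow-list) says yes, and only the commercial constraint says no.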
Check out the incredible speaker line-up to see who will be joining Robert.