Conference Day Two - Wednesday 14th April
This agenda is subject to change.
The EU AI Act is setting the pace for AI regulation, but it is far from the only framework shaping enterprise adoption. The UK has opted for a more sector-led model, the US is layering federal guidance with state-level rules, and governments worldwide are weighing innovation against oversight. For global organisations, the challenge is clear: how to operationalise compliance across multiple jurisdictions while still driving competitive advantage. This panel explores the state of play, key risks, and practical steps for building resilient governance.
• Unpacking the EU AI Act and its implications for high-risk systems
• Comparing approaches across the UK, US, and other major jurisdictions
• Building agile compliance strategies that enable global AI innovation at scale
Leave with clarity on regulatory trends and actionable insights for cross-border compliance.
In a publicly funded organisation like the BBC, innovating at the speed of AI requires more than enthusiasm - it demands rigour, vision, and systems that can turn creativity into capability. As Head of Generative AI, Jon is helping the BBC establish a GenAI Hub: a function designed to enable safe, meaningful and scalable adoption of generative AI across editorial, production, and operations.
This session will explore what it takes to move from isolated experiments to organisation-wide capability. Using examples such as the Bitesize AI Tutor prototype, as well as emerging work in journalism and production, Jon will delve into case studies and unpack how the BBC is building the frameworks, partnerships, and governance needed to deliver value responsibly.
Key themes:
• From pilots to platforms: how early prototypes inform strategy without overpromising product launches
• Unlocking value in the newsroom and production: incubating agentic tools for journalism, content creation, and cross-platform efficiency
• Balancing safety and speed: lightweight triage and risk processes that enable rapid but responsible experimentation
As AI systems become increasingly autonomous, governance cannot remain static. This panel brings together leaders from highly regulated industries to explore how they are embedding dynamic guardrails into AI operations, ensuring safety and accountability without stalling innovation.
• Operationalising governance for multi-agent and self-directed workflows
• Meeting evolving regulatory standards across global jurisdictions
• Balancing innovation speed with ethical and compliance frameworks
Learn how to design governance models that evolve with the pace of AI autonomy.
As GenAI becomes embedded into financial services, the challenge isn’t just innovation - it’s implementation at scale, with trust, tooling and transparency. At Mastercard, Adeline is driving the development of evaluation toolkits and guardrails that support safe experimentation across the enterprise. From evaluating both traditional and generative models to preparing for the EU AI Act, her team is building the platforms and processes that enable product teams to explore agentic commerce and next-gen GenAI use cases - without compromising on trust, tone, or security.
• Building automated toolkits for model evaluation, bias detection and GenAI guardrails
• Enabling safe exploration of agentic commerce use cases across digital product teams
• Aligning AI governance and platform strategy across the Chief Data Office and AI CoE
This panel brings together industry leaders and compliance experts to discuss what governance will look like as AI becomes autonomous - and how organisations can harmonise policies across global markets.
• Understanding regulatory trends shaping the next five years
• Aligning enterprise governance with multi-region compliance demands
• Preparing for cross-industry collaborations on responsible AI standards
Leave with a clear view of the regulatory horizon and practical steps to future-proof your governance strategy.
With dozens of AI agents built across BP’s functions - from data retrieval to task orchestration - the challenge isn’t whether the technology works, it’s how to make it usable, safe and enterprise-ready. Natalia leads work on BP’s internal Multi-Agent Control Platform (MCP) and emerging agent-to-agent (A2A) protocols, helping unify fragmented agent systems into cohesive, business-ready workflows. This session shares lessons on scaling adoption, governance, and data integration.
• Designing agent-to-agent systems for secure, predictable enterprise workflows at scale
• Solving for adoption: architecture, data readiness, and investment in business-side change
• Building practical guardrails and evaluation methods for real-world agentic reliability
As AI adoption accelerates, so does its environmental impact. At the Metropolitan Police, Johnny is helping to shape a future where sustainability is woven into enterprise architecture - connecting emerging AI capabilities to the realities of energy use, cloud consumption, and carbon impact. As a member of the Green Software Foundation and a leading voice in the UK’s push for Green AI standards, he offers a timely and important perspective: how public institutions can pursue innovation responsibly, through architectures that support both intelligence and impact reduction.
• Connecting AI strategy to carbon reduction: why Green AI must be part of the plan
• Designing for sustainability: cloud, compute and governance choices that reduce emissions
• Aligning with Green Software Foundation standards to future-proof responsible public AI
AI investment is accelerating, but proving tangible near-term business value remains a challenge. This panel brings together enterprise AI leaders to unpack how to secure continuous internal buy-in, collaborate across non-technical functions, and position AI as a core business enabler - not just a technical experiment.
• Defining and measuring the real impact of AI investments: cost, productivity, revenue, and beyond
• Securing executive support and funding for AI products in competitive prioritisation environments
• Communicating value to non-technical stakeholders and navigating organisational AI illiteracy
• Embedding AI thinking across product, operations, and business strategy
• Evolving your role from technical lead to enterprise AI change agent
The AA has been steadily building its capabilities in generative and agentic AI, testing use cases across insurance, claims, HR, finance, and customer operations. Alongside this, a Centre of Excellence has been established to provide governance, share learnings, and guide adoption. This session highlights how learning and governance are driving scale.
• Exploring early GenAI and agentic AI use cases across core business functions
• Establishing a Centre of Excellence to coordinate adoption and drive best practice
• Capturing lessons learned to prepare for enterprise-wide scale-up next year
Leave with practical insights into how the AA is turning experimentation into enterprise-scale impact.
GenAI in clinical development must be governed, repeatable, and auditable. Bryan shares Thermo Fisher’s scaling approach - an isolated enterprise LLM, ring-fenced patient-impacting workflows, and a ruthless ROI path - applied across ~3,000 global clinical trials each year.
• Segmenting patient-impacting vs business tasks, applying stricter controls where harm is possible
• Operating an isolated enterprise LLM with auditable prompts, retrieval, outputs, and lineage tracking
• Prototyping small and failing fast; scaling only with evidence of ROI and regulatory acceptance
A compliance-first blueprint you can defend to MHRA/FDA - controls, evaluation, documentation - and a funding cadence that prioritises value over novelty.
GenAI is transforming critical sectors, from personalised learning to clinical research. This panel highlights real-world deployments and the strategies required to deliver impact responsibly.
• Scaling AI in public services without widening digital inequality
• Measuring social impact beyond commercial KPIs
• Partnering across ecosystems to accelerate innovation in high-impact sectors
Leave with practical insights into using AI as a force for positive societal change.
As financial institutions race to adopt GenAI, the biggest risks aren’t just in the models - they’re in how those models are applied. At Danske Bank, Khaled is building a specialist model evaluation function to address exactly that: connecting technical robustness to practical deployment, and bridging the gap between regulation, risk and real-world GenAI use cases. With a mandate to evaluate GenAI applications within a three-lines-of-defence framework, and to interpret evolving EU AI Act requirements, this session shares a first-mover perspective on what responsible GenAI adoption really looks like in modern banking.
• Evaluating GenAI applications - not just models - within enterprise risk frameworks
• Interpreting and applying EU AI Act principles to real-world GenAI deployments
• Building a new kind of MRM function: beyond credit models, into AI-enabled operations
For many financial services firms, the biggest barrier to GenAI deployment isn’t capability - it’s compliance. In response, the Financial Conduct Authority has launched AI in Live Testing: a first-of-its-kind regulatory initiative that allows firms to trial real-world GenAI use cases with consumers, under supervision and with clear assurance mechanisms. In this session, Ed shares early insights from the programme, and how the FCA is enabling innovation with guardrails, helping firms move from POC to production with confidence, not caution.
• Enabling safe GenAI deployment through the FCA’s supervised ‘Live Testing’ environment
• Supporting firms in building scalable assurance and governance from the start
• Learning from live implementations: what’s working, what’s not, and what’s next
At BMW, every car has a story - from production to end-of-life. Ahmed leads efforts to capture and orchestrate that story through modular digital twins, designed to power internal use cases across compliance, automation, battery tracking, and real-time monitoring. From a proprietary digital twin platform that integrates AI workflows, to use cases spanning regulatory readiness and software update diagnostics, this session explores how BMW is operationalising its data infrastructure - turning every vehicle into a living, evolving data product.
• Building scalable digital twin architecture from production to post-lifecycle automation
• Embedding AI across quality checks, compliance, battery tracking and software monitoring
• Designing modular data products for internal teams to self-serve and scale use cases
With a rich background in research, academia, startups and at AWS, Bogdan joined The Economist to do something bold: build AI infrastructure where none existed. In this session, he shares how he's helped transform a landscape of fragmented ideas into a functioning AI platform - reducing model deployment time from six months to minutes, automating ML workflows, and preparing the organisation to responsibly integrate generative and agentic AI. It’s not just about models - it’s about people, processes and a culture that can evolve as fast as the tech.
• Operationalising GenAI: moving from experimentation to production-ready infrastructure
• Building sustainable, scalable AI workflows that balance agility with governance
• Managing culture change: enabling teams without rushing into every shiny new tool
As generative AI reshapes how we search, consume and interpret information online, the need for reliable attribution and effective labelling is more urgent than ever. At Ofcom, Jess is leading research into how users respond to AI-generated content - from watermarks and metadata to annotations like Community Notes - and how these measures influence trust, interpretation, and onward sharing. Drawing on insights from Ofcom’s Deepfake Defences 2 report, this session explores what’s working, what’s not, and what platforms, policymakers and users need to know.
• Evaluating watermarking and metadata: how robust are they to basic manipulation?
• Analysing user response to AI labels and annotations across different content formats
• Understanding how chatbots and GenAI are shaping online search and fact-checking behaviour
As Chief Technology Officer at V.Group, Jeremy is leading efforts to harness AI and GenAI across one of the world’s most complex and global industries. With vast proprietary datasets and operations spanning crews at sea to on-shore command centres, the shipping sector presents both enormous opportunities and unique challenges for AI adoption. Jeremy will share how V.Group is approaching innovation in this space - from early proofs of concept using specialised models to unlock unstructured data, to predictive systems that transform compliance and risk management.
• Exploring how shipping’s complexity shapes AI opportunities and challenges at scale
• Moving from proofs of concept to applied AI that delivers measurable operational impact
• Building data access and trust across crews, shore operations, and external stakeholders
Generative AI was the first leap; autonomy is the second - so what’s next? This keynote offers a forward-looking view on neuro-symbolic AI, human-AI symbiosis, and the policy, compute, and ethics challenges shaping the next decade.
• Tracking emerging models and architectures beyond LLMs
• Preparing for convergence across AI, IoT, and intelligent automation
• Anticipating next-generation risks, from sustainability to weaponisation
Leave with a long-term perspective to guide enterprise investment and innovation strategies.