Responsible AI Summit Main Conference Day 1 - Monday 21 September


Morning Plenary Session

8:30 am - 9:15 am Registration & Breakfast

9:15 am - 9:30 am Chair's Opening Remarks

In boardrooms, AI discussions are no longer abstract; they are charged with urgency, competing priorities, and hard trade-offs. C-suite leaders are navigating pressure to move fast while confronting rising demands for accountability, safety, and trust. This opening panel sets the scene for 2026, exploring the tensions shaping those conversations, how executives are interpreting "Responsible AI," and where alignment and friction are emerging at the top.

• Balancing speed with accountability in AI adoption.

• Interpreting Responsible AI across competing executive priorities.

• Managing trade-offs between innovation, risk, and governance.


Ajay Chakravarthy

Chief AI Officer
Thales


Mahendra Muralidhar

Chief Technology Officer
VML


Rebecca Salsbury

Chief Product and Technology Officer
Financial Times

10:00 am - 10:30 am Morning Plenary Keynote – Responsible AI, Agentic Systems, and What Actually Scales

Daniel Hulme - Chief AI Officer, WPP

AI has moved beyond proof-of-concept, but the gap between ambition and sustained enterprise value remains. Scaling successfully depends not just on the technology itself but on how organisations think about it - cutting through hype, building genuine literacy, and putting the right governance in place so that agentic AI can deliver real commercial outcomes. In this keynote, Dr Daniel Hulme, Chief AI Officer at WPP, will offer a practical framework for thinking about AI and agentic technologies - where the genuine opportunities lie, and where organisations risk being seduced by inflated expectations.

• A practical framework for AI and agentic adoption that cuts through the hype.

• Why governance and literacy are prerequisites for scaling - and how to frame that in terms the C-suite values.

• Augmenting, not replacing: using AI and agents to unlock creative and commercial potential.

• The broader implications of these technologies for business and society.



Daniel Hulme

Chief AI Officer
WPP

10:30 am - 11:30 am Morning Coffee Break and Speed Networking Session

Take a break, recharge, and make some new connections! This fast‑paced networking speed round gives you the chance to meet multiple peers over refreshments, exchange insights, and spark conversations you can continue throughout the event.

Morning Streamed Sessions

Stream A – Governance: Legal, Compliance, and Accountability

11:30 am - 12:00 pm Panel Discussion – When Law Pushes and Governance Pulls: Closing the AI Accountability Gap
Barbara Zapisetskaya - Principal Counsel, EBRD
Kirsten Van der Zwan - Chief Privacy Officer and Global Head of Privacy, AI & Digital Compliance, Signify
Gwen Morvan - Senior Legal Counsel, Synthesia

AI regulation is accelerating, yet many organisations still struggle to translate legal obligations into effective, day-to-day governance. Under the EU AI Act and emerging standards frameworks, organisational functions including Legal and Governance are collectively responsible for risk classification, oversight, documentation, and organisational controls, but often approach these duties from different angles. This session explores how Legal and Governance teams can work together in practice: aligning responsibilities, streamlining AI use case triage, and building a shared governance model that stands up to regulatory scrutiny.

• Aligning legal and governance roles through shared accountability models

• Improving AI use case filtering to focus oversight where it matters most

• Strengthening collaboration by clarifying what Governance needs from Legal to operationalise compliance



Barbara Zapisetskaya

Principal Counsel
EBRD


Kirsten Van der Zwan

Chief Privacy Officer and Global Head of Privacy, AI & Digital Compliance
Signify


Gwen Morvan

Senior Legal Counsel
Synthesia

Stream A – Governance: Legal, Compliance, and Accountability

12:00 pm - 12:30 pm Presentation – Practical Ways to Manage Third Party and Privacy Risk at Scale
Lucia Batlova - Senior Legal Counsel & EMEA Privacy Lead, Lenovo

AI-driven innovation is accelerating, and with it, a surge of new tools, datasets, and experimental vendors vying for a place in the enterprise. Every promising partnership also carries potential privacy, security, and regulatory exposure. In this session, Lucia Batlova, Senior Legal Counsel & EMEA Privacy Lead at Lenovo, reveals how teams in a global organisation cut through the noise: rapidly assessing high-volume third-party requests, screening risky AI initiatives without slowing momentum, and enabling safe experimentation at scale. With real-world examples, she shows how legal can stay firmly positioned as an innovation accelerator while keeping risk firmly in check, even amid a fragmented global AI landscape where traditional privacy and security frameworks fall short.

• Building scalable screening processes

• Embedding privacy by design into teams

• Automating and operationalising third party oversight



Lucia Batlova

Senior Legal Counsel & EMEA Privacy Lead
Lenovo

Stream A – Governance: Legal, Compliance, and Accountability

12:30 pm - 1:00 pm Panel Discussion – Embedding EU AI Act Governance: From Regulation to Real-World Implementation
Pranav Gupte - Associate Director, Data and AI Policy, AstraZeneca
Lara Nogueira - Head of Responsible AI & Data Compliance, Ericsson
Gary Brown - Chief Privacy Officer, Westinghouse Electric Company
Oriana Medlicott - Responsible AI EU Lead, Admiral Group

As the EU AI Act moves into its implementation phase, the focus is shifting from legislative ambition to operational delivery. Organisations must interpret risk classifications, conformity assessments and oversight duties, while aligning internal governance structures. This session offers a clear and practical update on timelines, enforcement trends and what regulators expect.

• Clarifying risk tiers and governance responsibilities.

• Aligning internal controls with supervisory scrutiny.

• Preparing for documentation, audit and enforcement readiness.



Pranav Gupte

Associate Director, Data and AI Policy
AstraZeneca


Lara Nogueira

Head of Responsible AI & Data Compliance
Ericsson


Gary Brown

Chief Privacy Officer
Westinghouse Electric Company


Oriana Medlicott

Responsible AI EU Lead
Admiral Group

Stream B – Technical: Designing Governed Agentic Systems

11:30 am - 12:00 pm Presentation / Case Study – Designing Guardian Agents: A Taxonomy for Governing Multi-Agent AI Systems
Alessandro Castelnovo - Head of Responsible AI, Intesa Sanpaolo

In this technical case study, Alessandro Castelnovo, Head of Responsible AI at Intesa Sanpaolo, details how the bank designed and operationalised Guardian Agents to govern emerging multi-agent AI ecosystems. He presents a formal taxonomy that combines three operational roles (Reviewers, Monitors, and Protectors) with five structured risk domains: Data Security & Protection; Performance & Reliability; Quality & Compliance; Explainability & Transparency; and Ethical Coordination & Decisioning. The framework clarifies responsibilities, embeds automated safeguards, enables continuous oversight, and supports accountable, resilient orchestration across complex agent networks.

• Formalising operational roles to define structured AI oversight mechanisms.

• Creating five risk domains that align governance with concrete control layers.

• Embedding safeguards to enable resilient, accountable multi-agent orchestration.



Alessandro Castelnovo

Head of Responsible AI
Intesa Sanpaolo

Stream B – Technical: Designing Governed Agentic Systems

12:00 pm - 12:30 pm Presentation / Case Study – Fairness Under Drift: Building Adaptive AI for High-Stakes Domains
Stuart Burrell - Director of AI Research and Innovation, VISA

AI systems in high-stakes domains such as consumer finance must remain reliable as data distributions shift, regulations evolve, and new patterns emerge at scale. In this session, Dr Stuart Burrell, Director of AI Research and Innovation at Visa, shares advances in building adaptive AI systems that maintain both performance and fairness in production environments. Drawing on research across fraud detection, credit decisioning, and vision–language models, the session explores core challenges and demonstrates how principled fairness research can deliver measurable reductions in bias while maintaining performance at global payments scale.

• Bias can emerge collectively in complex multi-agent AI systems.

• Traditional test-time adaptation may worsen disparities under distribution shift.

• Novel adaptation methods and fairness monitoring enable equitable AI at production scale.


Stuart Burrell

Director of AI Research and Innovation
VISA

Stream B – Technical: Designing Governed Agentic Systems

12:30 pm - 1:00 pm Panel Discussion – Who Is the CEO of an Agent? Delegation, Agency & Responsibility
Detlef Nauck - Head of AI & Data Science Research, BT
Olu Akinyede - Data Privacy, Data Governance and AI Ethics, Aviva
Ramin Mobasseri - Head of Agentic AI Delivery, Wells Fargo

As enterprises delegate decisions to autonomous systems, they must confront a deeper question: what does it mean to transfer authority without severing responsibility? Agentic AI challenges traditional notions of control, oversight, and accountability, forcing organisations to redefine human agency in operational terms. This panel explores the philosophical and practical dimensions of delegation, and the skills required to remain meaningfully responsible for systems that act on our behalf.

• Examining delegation without severing human responsibility.

• Redefining agency in autonomous enterprise systems.

• Building skills for accountable human oversight.



Detlef Nauck

Head of AI & Data Science Research
BT


Olu Akinyede

Data Privacy, Data Governance and AI Ethics
Aviva


Ramin Mobasseri

Head of Agentic AI Delivery
Wells Fargo

Lunch

1:00 pm - 2:00 pm Lunch in the Exhibition Hall: Network with your Peers

Afternoon Streamed Sessions

Stream A – Governance: Governing Autonomy at Scale

2:00 pm - 2:30 pm Presentation / Case Study – The Rise of Agentic AI: Governing Autonomous Systems at Enterprise Scale
Rozemarijn Jens - Senior AI Innovation Lead, Shell

Agentic AI represents the next stage of enterprise AI—moving beyond copilots to systems that can plan, reason, and act within workflows. For organizations like Shell, operating in complex, safety-critical, and highly regulated environments, deploying such systems requires a strong focus on responsible design, governance, and transparency. This session will explore how large enterprises can adopt agentic AI while maintaining trust, accountability, and human oversight.

• Defining Agentic AI: what differentiates agents from traditional chatbots and copilots in enterprise settings.

• Enterprise use cases: how agentic systems can support engineering knowledge, operations, and complex decision-making.

• Responsible autonomy: designing bounded agents with clear scopes, guardrails, and human-in-the-loop oversight.

• Governance and observability: ensuring traceability, auditability, and compliance for AI-driven actions.

• Scaling responsibly: lessons for deploying agentic AI safely across a global organization.



Rozemarijn Jens

Senior AI Innovation Lead
Shell

Stream A – Governance: Governing Autonomy at Scale

2:30 pm - 3:00 pm Presentation / Case Study – From Regulation to Automation: Scaling AI Governance Without Compliance Drift in the Research and Development Area of Novo Nordisk
Per Rådberg Nagbøl - Senior Data & AI Governance Professional, Novo Nordisk

In highly regulated sectors, such as life sciences, regulatory changes create a hidden risk: compliance drift, beyond the usual data drift. Hence, there is a need for practical strategies to keep AI aligned with evolving regulations and operational realities. This session will present the AI governance setup in the Research and Development area of Novo Nordisk. It will cover ideas, challenges, and practical solutions for automating AI governance without correspondingly scaling human labour. 

• Moving from regulation to automatable requirements and processes that enable rule-, chatbot-, and agent-based automation.

• Preventing compliance drift.

• Making automation complement human labour.

• Creating oversight of AI systems and visualising AI system interdependencies.



Per Rådberg Nagbøl

Senior Data & AI Governance Professional
Novo Nordisk

Stream A – Governance: Governing Autonomy at Scale

3:00 pm - 3:30 pm Panel Discussion – Data Governance vs AI Governance: Where Accountability Actually Sits
Tom Heath - Chief Data and AI Officer, Ward Williams
Penny Jones - Responsible AI Lead, Zurich Insurance

As organisations scale AI, tensions often emerge between established data governance teams and newly formed AI governance functions. Overlapping mandates can create gaps or unclear accountability. This session takes a practical look at how leading enterprises are defining boundaries, integrating responsibilities, and building operating models that make data and AI governance work together in practice.

• Clarifying mandates between data and AI governance.

• Aligning ownership across the model lifecycle.

• Designing operating models that avoid duplication.



Tom Heath

Chief Data and AI Officer
Ward Williams


Penny Jones

Responsible AI Lead
Zurich Insurance

Stream B – Technical: Explainability, Control & Engineering Reality

2:00 pm - 2:30 pm Presentation / Case Study – From Black Box to Glass Box: Making AI Outputs Defensible
Pascal Hetzscholdt - Senior Director of AI Strategy and Content Integrity, Wiley

Explainability is often treated as a compliance afterthought; at Wiley, it is a system design requirement. At the scale of one of the world's largest scholarly publishers, that means AI outputs must be traceable, transparent about what models are trained on, clear on where human intervention sits in the workflow, and explicit about how updates and corrections are managed over time. In this session, Pascal Hetzscholdt, Senior Director of AI Strategy and Content Integrity, provides a candid, technical look at how enabling students and researchers to prompt directly against curated, licensed content materially reduces hallucinations.

• Addressing cost pressures that drive unseen model changes.

• Securing meaningful quick wins while protecting long-term information integrity.

• Strengthening institutional situational awareness.


Pascal Hetzscholdt

Senior Director of AI Strategy and Content Integrity
Wiley

Stream B – Technical: Explainability, Control & Engineering Reality

2:30 pm - 3:00 pm Presentation – AI Governance Meets Security: Building an Organisational Model
Robin Schoss - Director, AI Governance and Head of Information Security EMEA, Olympus

As AI adoption accelerates, governance and cybersecurity teams can no longer operate in silos. In this session, Robin Schoss, Director of AI Governance and Head of Information Security EMEA, shares a deep dive into how Olympus structures collaboration between AI governance and security functions. The presentation takes a practical look inside the organisational model: how responsibilities are split across governance, security, legal, and privacy teams, and where coordination challenges emerge. Robin offers a candid view of the pain points, trade-offs, and operational practices involved in embedding cybersecurity into AI oversight at enterprise scale.

• Olympus's organisational model for AI governance and cybersecurity.
• Key collaboration challenges between governance and security teams.
• Embedding security controls into enterprise AI governance processes.


Robin Schoss

Director, AI Governance and Head of Information Security EMEA
Olympus

Stream B – Technical: Explainability, Control & Engineering Reality

3:00 pm - 3:30 pm Panel Discussion – Engineering vs Governance: Have We Got It Figured Out?
Disha Mukherjee - Lead Data Engineer, Ford Credit
Graham Ross - Head of Responsible AI, Centrica
Kateryna Popova - Senior AI Data Governance Professional, Just Eat Takeaway.com

As AI systems scale and grow in complexity, the lines between engineering responsibility and governance oversight can blur. This panel explores where frictions emerge, how technical and governance teams can collaborate effectively, and whether current role definitions are fit for purpose. Through real-world examples, the discussion will probe whether enforcing policy-as-code and defining accountability truly resolve the tension.

• Assigning clear roles that reduce friction and evolve with system complexity.

• Embedding governance into practice through technical pipelines.

• Integrating monitoring and checks to enforce accountability in real time.



Disha Mukherjee

Lead Data Engineer
Ford Credit


Graham Ross

Head of Responsible AI
Centrica


Kateryna Popova

Senior AI Data Governance Professional
Just Eat Takeaway.com

Afternoon Plenary Session

3:30 pm - 4:00 pm Afternoon Refreshment Break


Scaling Responsible AI is moving from theory to execution, as organisations look to embed governance directly into how AI is designed and deployed at scale. Alice Genevois and Suzanne Brink explore how organisations can scale AI responsibly through a central and business-led operating model. The session looks at what strong AI use cases look like from a Responsible AI perspective, and how to move beyond one-off governance reviews into reusable, embedded patterns that support day-to-day delivery. It also examines how central expertise and business ownership combine to enable innovation that is trusted, compliant, and scalable.

• Responsible AI scales through clear business ownership, enabled by strong central standards

• Reusable patterns and embedded controls accelerate delivery without increasing risk

• Effective partnership between central and business teams is critical to trust and accountability



Suzanne Brink

Head of Responsible AI
Lloyds Banking Group


Alice Genevois

Responsible AI Lead for Consumer Relationships
Lloyds Banking Group

4:30 pm - 5:00 pm Afternoon Closing Panel Discussion – Faster Than We Think, Slower Than We Feel: The Real Pace of AI Adoption

Hellen Beveridge - Head of AI Governance & Ethics, AXA UK
Ramy Erfan - VP Business & Technology Enablement, Citi
Oliver Patel - Head of Enterprise AI Governance, AstraZeneca

Frontier AI capabilities continue to leap forward, reshaping expectations and recalibrating what "state of the art" means almost monthly. Yet inside most organisations, adoption remains steady, deliberate, and constrained by governance, infrastructure, and readiness. This widening gap creates strategic confusion: are we preparing for a world that's already here, or bracing for breakthroughs that won't materialise evenly? This session aims to disentangle technical progress from real‑world absorption capacity, clarify where agentic systems may genuinely transform workflows, and offer a pragmatic lens for prioritising business value amid hype cycles.

• Accelerating capability growth outpacing enterprise and policy absorption curves.

• Harnessing agentic systems where frontier advances drive real operational uplift.

• Prioritising durable shifts while filtering out performative or premature noise.


Hellen Beveridge

Head of AI Governance & Ethics
AXA UK


Ramy Erfan

VP Business & Technology Enablement
Citi


Oliver Patel

Head of Enterprise AI Governance
AstraZeneca

5:00 pm - 7:00 pm End of Conference Day 1 & Evening Networking Drinks Reception