Responsible AI Summit Main Conference Day 1 - Monday 21 September


Morning Plenary Session

8:00 am - 8:45 am Registration & Breakfast


Ray Eitel-Porter

Senior Research Associate, The Intellectual Forum
Jesus College Cambridge

9:00 am - 9:30 am Opening Panel Discussion – Faster Than We Think, Slower Than We Feel: The Real Pace of AI Adoption

Oliver Patel - Head of Enterprise AI Governance, AstraZeneca

Frontier AI capabilities continue to leap forward, reshaping expectations and recalibrating what “state of the art” means almost monthly. Yet inside most organisations, adoption remains steady, deliberate, and constrained by governance, infrastructure, and readiness. This widening gap creates strategic confusion: are we preparing for a world that’s already here, or bracing for breakthroughs that won’t materialise evenly?

This session aims to disentangle technical progress from real-world absorption capacity, clarify where agentic systems may genuinely transform workflows, and offer a pragmatic lens for prioritising what matters amid hype cycles.

• Accelerating capability growth outpacing enterprise and policy absorption curves.

• Harnessing agentic systems where frontier advances drive real operational uplift.

• Prioritising durable shifts while filtering out performative or premature noise.



Oliver Patel

Head of Enterprise AI Governance
AstraZeneca

9:30 am - 10:00 am Morning Plenary Keynote – session details to be announced


10:00 am - 10:30 am Morning Plenary Keynote – Responsible AI, Agentic Systems, and What Actually Scales

Daniel Hulme - Chief AI Officer, WPP

AI has moved beyond proof-of-concept, but the gap between ambition and sustained enterprise value remains. Scaling successfully depends not just on the technology itself but on how organisations think about it - cutting through hype, building genuine literacy, and putting the right governance in place so that agentic AI can deliver real commercial outcomes. In this keynote, Dr Daniel Hulme, Chief AI Officer at WPP, will offer a practical framework for thinking about AI and agentic technologies - where the genuine opportunities lie, and where organisations risk being seduced by inflated expectations.

• A practical framework for AI and agentic adoption that cuts through the hype.

• Why governance and literacy are prerequisites for scaling - and how to frame that in terms the C-suite values.

• Augmenting, not replacing: using AI and agents to unlock creative and commercial potential.

• The broader implications of these technologies for business and society.



Daniel Hulme

Chief AI Officer
WPP

10:30 am - 11:00 am Morning Networking Coffee Break

Morning Streamed Sessions

Governance Stream A - Turning regulation into workable governance through clear accountability, scalable controls, and legal and operational alignment.

11:00 am - 11:30 am Presentation – Session Details To Be Announced



11:30 am - 12:00 pm Panel Discussion – When Law Pushes and Governance Pulls: Closing the AI Accountability Gap
Barbara Zapisetskaya - Principal Counsel, EBRD
Kirsten Van der Zwan - Chief Privacy Officer and Global Head of Privacy, AI & Digital Compliance, Signify

AI regulation is accelerating, yet many organisations still struggle to translate legal obligations into effective, day-to-day governance. Under the EU AI Act and emerging standards frameworks, organisational functions including Legal and Governance are collectively responsible for risk classification, oversight, documentation, and organisational controls, but often approach these duties from different angles. This session explores how Legal and Governance teams can work together in practice: aligning responsibilities, streamlining AI use case triage, and building a shared governance model that stands up to regulatory scrutiny.

• Aligning legal and governance roles through shared accountability models

• Improving AI use case filtering to focus oversight where it matters most

• Strengthening collaboration by clarifying what Governance needs from Legal to operationalise compliance



Barbara Zapisetskaya

Principal Counsel
EBRD


Kirsten Van der Zwan

Chief Privacy Officer and Global Head of Privacy, AI & Digital Compliance
Signify


12:00 pm - 12:30 pm Presentation – Practical Ways to Manage Third Party and Privacy Risk at Scale
Lucia Batlova - Senior Legal Counsel and Privacy & Data Protection Lead, Lenovo

AI-driven innovation is accelerating, and with it, a surge of new tools, datasets, and experimental vendors vying for a place in the enterprise. Every promising partnership also carries potential privacy, security, and regulatory exposure. In this session, Lucia Batlova, EMEA Data Protection & Privacy Lead at Lenovo, reveals how teams in a global organisation cut through the noise: rapidly assessing high-volume third-party requests, screening risky AI initiatives without slowing momentum, and enabling safe experimentation at scale. With real-world examples, she shows how legal can stay firmly positioned as an innovation accelerator while keeping risk firmly in check, given the fragmented global AI landscape and the insufficiency of traditional privacy and security frameworks.

• Building scalable screening processes

• Embedding privacy by design into teams

• Automating and operationalising third-party oversight



Lucia Batlova

Senior Legal Counsel and Privacy & Data Protection Lead
Lenovo


12:30 pm - 1:00 pm Panel Discussion – Embedding EU AI Act Governance: From Regulation to Real-World Implementation
Pranav Gupte - Associate Director, Data and AI Policy, AstraZeneca
Lara Nogueira - Head of Responsible AI & Data Compliance, Ericsson
Gary Brown - Chief Privacy Officer, Westinghouse Electric Company

As the EU AI Act moves into its implementation phase, the focus is shifting from legislative ambition to operational delivery. Organisations must interpret risk classifications, conformity assessments and oversight duties, while aligning internal governance structures. This session offers a clear and practical update on timelines, enforcement trends and what regulators expect.

• Clarifying risk tiers and governance responsibilities.

• Aligning internal controls with supervisory scrutiny.

• Preparing for documentation, audit and enforcement readiness.



Pranav Gupte

Associate Director, Data and AI Policy
AstraZeneca


Lara Nogueira

Head of Responsible AI & Data Compliance
Ericsson


Gary Brown

Chief Privacy Officer
Westinghouse Electric Company

Technical Stream B - Technical approaches to agentic systems, drift and hallucinations, and authority.

11:00 am - 11:30 am Presentation – Session Details To Be Announced



11:30 am - 12:00 pm Presentation – Designing Guardian Agents: A Taxonomy for Governing Multi-Agent AI Systems
Alessandro Castelnovo - Head of Responsible AI, Intesa Sanpaolo

In this technical case study, Alessandro Castelnovo, Head of Responsible AI at Intesa Sanpaolo, details how the bank designed and operationalised Guardian Agents to govern emerging multi-agent AI ecosystems. He presents a formal taxonomy that combines three operational roles (Reviewers, Monitors, and Protectors) with five structured risk domains: Data Security & Protection; Performance & Reliability; Quality & Compliance; Explainability & Transparency; and Ethical Coordination & Decisioning. The framework clarifies responsibilities, embeds automated safeguards, enables continuous oversight, and supports accountable, resilient orchestration across complex agent networks.

• Formalising operational roles to define structured AI oversight mechanisms.

• Creating five risk domains that align governance with concrete control layers.

• Embedding safeguards to enable resilient, accountable multi-agent orchestration.



Alessandro Castelnovo

Head of Responsible AI
Intesa Sanpaolo


12:00 pm - 12:30 pm Presentation – Fairness Under Drift: Building Adaptive AI for High-Stakes Domains
Stuart Burrell - Director of AI Research and Innovation, VISA

AI systems in high-stakes domains such as consumer finance must remain reliable as data distributions shift, regulations evolve, and new patterns emerge at scale. In this session, Dr Stuart Burrell, Director of AI Research & Innovation, and Dr Maeve Madigan, Research Scientist, both at Visa, share advances in building adaptive AI systems that maintain both performance and fairness in production environments. Drawing on research across fraud detection, credit decisioning, and vision–language models, the session explores three core challenges: how bias can emerge as a collective property in multi-agent systems, how standard test-time adaptation methods may amplify disparities under distribution shift, and how fairness evaluation must account for domain constraints such as extreme class imbalance and the dual goals of protection and service. The session also presents methods that improve fairness without requiring model retraining, demonstrating how principled fairness research can deliver measurable reductions in bias while maintaining performance at global payments scale.

• Bias can emerge collectively in complex multi-agent AI systems.
• Traditional test-time adaptation may worsen disparities under distribution shift.
• Novel adaptation methods and fairness monitoring enable equitable AI at production scale.


Stuart Burrell

Director of AI Research and Innovation
VISA


12:30 pm - 1:00 pm Panel Discussion – Who Is the CEO of an Agent? Delegation, Agency & Responsibility
Detlef Nauck - Head of AI & Data Science Research, BT
Olu Akinyede - Data Privacy, Data Governance and AI Ethics, Aviva
Ramin Mobasseri - Head of Agentic AI Delivery, Wells Fargo

As enterprises delegate decisions to autonomous systems, they must confront a deeper question: what does it mean to transfer authority without severing responsibility? Agentic AI challenges traditional notions of control, oversight, and accountability, forcing organisations to redefine human agency in operational terms. This panel explores the philosophical and practical dimensions of delegation, and the skills required to remain meaningfully responsible for systems that act on our behalf.

• Examining delegation without severing human responsibility.

• Redefining agency in autonomous enterprise systems.

• Building skills for accountable human oversight.



Detlef Nauck

Head of AI & Data Science Research
BT


Olu Akinyede

Data Privacy, Data Governance and AI Ethics
Aviva


Ramin Mobasseri

Head of Agentic AI Delivery
Wells Fargo

Lunch

1:00 pm - 2:05 pm Lunch in the Exhibition Hall: Network with your Peers

Afternoon Streamed Sessions

Governance Stream A - Governing agentic autonomy by redesigning oversight, accountability, and operating models at scale.

2:05 pm - 2:35 pm Presentation – The Rise of Agentic AI: Governing Autonomous Systems at Enterprise Scale
Rozemarijn Jens - Senior AI Innovation Lead, Shell

Agentic AI represents the next stage of enterprise AI—moving beyond copilots to systems that can plan, reason, and act within workflows. For organisations like Shell, operating in complex, safety-critical, and highly regulated environments, deploying such systems requires a strong focus on responsible design, governance, and transparency. This session will explore how large enterprises can adopt agentic AI while maintaining trust, accountability, and human oversight.

• Defining Agentic AI: what differentiates agents from traditional chatbots and copilots in enterprise settings.

• Enterprise use cases: how agentic systems can support engineering knowledge, operations, and complex decision-making.

• Responsible autonomy: designing bounded agents with clear scopes, guardrails, and human-in-the-loop oversight.

• Governance and observability: ensuring traceability, auditability, and compliance for AI-driven actions.

• Scaling responsibly: lessons for deploying agentic AI safely across a global organisation.



Rozemarijn Jens

Senior AI Innovation Lead
Shell


2:35 pm - 3:05 pm Presentation – Rethinking Human-in-the-Loop: Beyond the Rubber Stamp
David Crelley - Head of Responsible AI and Data, Admiral Group

The EU AI Act mandates human oversight, yet in practice this often amounts to junior staff acting as “AI checkers”, rubber-stamping automated outputs. In this candid session, Dr David Crelley, Head of Responsible AI & Data at Admiral Group, challenges the compliance-driven interpretation of human-in-the-loop and examines why oversight designed for efficiency frequently undermines effectiveness and accountability. He will share how his team is rethinking oversight as proactive engagement: using judge LLMs to do the heavy lifting while deliberately introducing friction, cultural ownership, and meaningful intervention points into AI-enabled processes. David offers practical insight into how to design oversight models that create genuine human engagement rather than passive validation.

• Moving from passive validation to active engagement.

• Using LLMs to do the basic checks.

• Designing friction to strengthen human judgement.

• Embedding cultural ownership into AI oversight.



David Crelley

Head of Responsible AI and Data
Admiral Group


3:05 pm - 3:35 pm Panel Discussion – Data Governance vs AI Governance: Where Accountability Actually Sits
Andi McAleer - Head of Data and AI Governance, Financial Times
Tom Heath - Chief Data and AI Officer, Ward Williams

As organisations scale AI, tensions often emerge between established data governance teams and newly formed AI governance functions. Overlapping mandates can create gaps or unclear accountability. This session takes a practical look at how leading enterprises are defining boundaries, integrating responsibilities, and building operating models that make data and AI governance work together in practice.

• Clarifying mandates between data and AI governance.

• Aligning ownership across the model lifecycle.

• Designing operating models that avoid duplication.



Andi McAleer

Head of Data and AI Governance
Financial Times


Tom Heath

Chief Data and AI Officer
Ward Williams

Technical Stream B - Engineering explainable, governed AI systems where accountability is designed, automated, and shared across teams.

2:05 pm - 2:35 pm Presentation – From Regulation to Automation: Scaling AI Governance Without Compliance Drift in the Research and Development Area of Novo Nordisk
Per Rådberg Nagbøl - Senior Data & AI Governance Professional, Novo Nordisk

In highly regulated sectors, such as life sciences, regulatory changes create a hidden risk: compliance drift, beyond the usual data drift. Hence, there is a need for practical strategies to keep AI aligned with evolving regulations and operational realities. This session will present the AI governance setup in the Research and Development area of Novo Nordisk. It will cover ideas, challenges, and practical solutions for automating AI governance without correspondingly scaling human labour. The presentation will include how to move from regulation to automatable requirements and processes that enable the use of rule-, chatbot-, and agent-based automation. It will also address how to prevent compliance drift, how to make automation complement human labour, and how to create oversight of AI systems and visualise AI system interdependencies.


Per Rådberg Nagbøl

Senior Data & AI Governance Professional
Novo Nordisk


2:35 pm - 3:05 pm Presentation – From Black Box to Glass Box: Making AI Outputs Defensible
Pascal Hetzscholdt - Senior Director of AI Strategy and Content Integrity, Wiley

Explainability is often treated as a compliance afterthought; at Wiley, it is a system design requirement. At the scale of one of the world’s largest scholarly publishers, that means AI outputs must be traceable, transparent about what models are trained on, clear on where human intervention sits in the workflow, and explicit about how updates and corrections are managed over time. In this session, Pascal Hetzscholdt, Senior Director of AI Strategy and Content Integrity, provides a candid, technical look at how enabling students and researchers to prompt directly against curated, licensed content materially reduces hallucinations. He will address realities rarely discussed publicly: cost pressures that drive unseen model changes, quality drift when models switch, and the operational discipline required to maintain standards. The session explores how to secure meaningful quick wins while protecting long-term information integrity and strengthening institutional situational awareness.


Pascal Hetzscholdt

Senior Director of AI Strategy and Content Integrity
Wiley


3:05 pm - 3:35 pm Panel Discussion – Engineering vs Governance: Have We Got It Figured Out?
Disha Mukherjee - Lead Data Engineer, Ford Credit
Graham Ross - Head of Responsible AI, Centrica

As AI systems scale and grow in complexity, the lines between engineering responsibility and governance oversight can blur. This panel explores where frictions emerge, how technical and governance teams can collaborate effectively, and whether current role definitions are fit for purpose. Through real-world examples, the discussion will probe whether enforcing policy-as-code and defining accountability truly resolves tension.

• Assigning clear roles to reduce friction and evolving with system complexity.

• Embedding governance into practice through technical pipelines.

• Integrating monitoring and checks to enforce accountability in real time.



Disha Mukherjee

Lead Data Engineer
Ford Credit


Graham Ross

Head of Responsible AI
Centrica

Afternoon Plenary Session

3:35 pm - 4:05 pm Afternoon Refreshment Networking Break

Talk Details to Be Announced 


Suzanne Brink

Head of Responsible AI
Lloyds Banking Group


Alice Genevois

Responsible AI Lead for Consumer Relationships
Lloyds Banking Group

4:35 pm - 5:05 pm Presentation – Session Details To Be Announced


AI transformation is not a technology upgrade; it is a strategic choice about who you are becoming. In a rapidly expanding AI ecosystem, organisations must define direction before scaling adoption, ensuring investments drive measurable value and sustained competitive relevance. This closing panel explores how leaders align AI ambition with profitability, brand visibility, and responsible growth. The conversation moves beyond experimentation to long-term positioning in an increasingly crowded and fast-moving market.

• Defining strategic direction before scaling AI.

• Prioritising measurable value over chasing AI hype.

• Strengthening brand visibility within evolving AI ecosystems.



Ray Eitel-Porter

Senior Research Associate, The Intellectual Forum
Jesus College Cambridge


Penny Jones

Responsible AI Lead
Zurich Insurance UK


Ramy Erfan

VP Business & Technology Enablement
Citi

5:40 pm - 7:40 pm End of Conference Day 1 & Evening Networking Drinks Reception