Responsible AI Summit Main Conference Day 2 - Tuesday 22 September


Morning Opening Session

8:00 am - 8:30 am Morning Networking Breakfast & Coffee

8:00 am - 8:30 am Breakfast Breakout Session – Collective Intelligence: How Should the Responsible AI Community Actually Operate?

Responsible AI cannot succeed as a collection of isolated policies and private frameworks. Yet competitive pressure, liability concerns, regulatory uncertainty, and reputational risk often limit how openly organisations collaborate. As AI systems grow more powerful and interconnected, the real question is no longer whether we need a Responsible AI community, but how that community genuinely wants to function.

This session goes beyond high-level principles to examine operating models, trust mechanisms, shared infrastructure, and practical coordination. Together, participants will confront what it would take to move from fragmented commitments to a durable, transparent, and accountable Responsible AI ecosystem that actually works in practice.
• Sharing risk signals without exposing competitive strategy.
• Building durable cross-sector accountability mechanisms.
• Defining shared norms for collaboration across the RAI ecosystem.

8:30 am - 8:40 am Chair's Opening Remarks

AI’s rapid expansion carries a growing environmental footprint, from data centre infrastructure and hardware supply chains to model training, inference, and everyday user behaviour. Yet much of AI’s carbon impact remains opaque, distributed, and poorly measured. As adoption accelerates, organisations must ask a harder question: what is the true sustainability cost of AI, and who owns it?

This panel moves beyond awareness to break down AI’s carbon use spectrum: infrastructure, operations, and behavioural demand. Speakers examine where emissions concentrate, how to distinguish high-value use from low-value usage, and what practical interventions can meaningfully reduce impact without stalling innovation.
• Measuring emissions across the full AI lifecycle.
• Identifying high-impact vs low-value AI usage.
• Embedding sustainability into governance and daily practice.

Laura Gunn

Director of ESG Programs
Wiley

Paul Dongha

Head of Responsible AI & AI Strategy
NatWest Group

Arlette van Wissen

Responsible and Sustainable AI Lead
Philips

9:10 am - 9:40 am Morning Presentation – Session Details To Be Announced


9:40 am - 10:10 am Morning Plenary Keynote – Building an Agentic AI Governance Framework

Oliver Patel - Head of Enterprise AI Governance, AstraZeneca
Oliver Patel

Head of Enterprise AI Governance
AstraZeneca

Responsible AI is full of high-stakes decisions and uncomfortable trade-offs, and this session brings them into the open. Participants will engage with a series of provocative statements on today’s most contentious AI issues, ranging from frontier scaling to synthetic data, alignment risks, liability boundaries, open-source governance, model interpretability, and beyond. After hearing each statement, attendees will split into groups, each tackling one hot-button topic. Within each cohort, participants will debate both the affirmative and the opposing positions, then present their strongest arguments back to the room.

• Surfacing competing priorities that shape responsible AI decisions.

• Challenging assumptions through structured, balanced debate.

• Bridging technical, ethical, and strategic perspectives for better AI governance.


Tessa Darbyshire

Head of AI Governance
Informa

Caroline Ellis

AI & Data Ethics Lead
NatWest Group

10:45 am - 11:15 am Morning Coffee Break

Late Morning Session

Stream A – Governance: Culture, Literacy & Foundations

11:10 am - 11:40 am Presentation – Session Details To Be Announced


Stream A – Governance: Culture, Literacy & Foundations

11:40 am - 12:10 pm Morning Plenary Keynote – Agentic AI Governance at Mastercard: A Risk Taxonomy for Agent Autonomy
Richard Boorman - Director of AI Governance, Mastercard

For an AI agent, even more than for other systems, “Responsible AI” is indistinguishable from “working correctly”. As a world leader in responsible AI, Mastercard is advancing governance for the next era of autonomous systems. As AI evolves from predictive tools to agentic systems that act independently, governance must evolve with equal clarity and precision. This session examines the additional risks agents introduce beyond traditional and generative AI, including Mastercard’s practical approach to managing them at scale.

• The case for Responsible Agentic AI.

• Agentic Risks and Risk Taxonomy.

• Developing Trustworthy, Responsible and Safe AI Agents.


Richard Boorman

Director of AI Governance
Mastercard

Stream A – Governance: Culture, Literacy & Foundations

12:10 pm - 12:40 pm Panel Discussion – Building a Culture: AI Literacy Beyond the Checkbox
James Fletcher - Responsible AI Lead, BBC
Ger Janssen - AI Ethics & Compliance Lead, Philips

AI literacy is the operating layer of responsible AI: shaping how teams interpret insights, exercise judgment, and maintain human oversight as AI becomes embedded in workflows. Moving beyond static adoption metrics, organisations must focus on cultural readiness and measurable capability shifts that influence governance and decision quality. This session explores practical approaches to literacy and culture so that AI deployment strengthens accountability and operational performance rather than simply increasing usage.

• Measuring literacy and decision impact beyond adoption metrics.

• Cultivating culture and shared understanding for responsible outcomes.

• Integrating literacy with operations to strengthen oversight and judgment.


James Fletcher

Responsible AI Lead
BBC

Ger Janssen

AI Ethics & Compliance Lead
Philips

Stream A – Governance: Culture, Literacy & Foundations

12:40 pm - 1:10 pm Presentation – Rethinking Human-in-the-Loop: Beyond the Rubber Stamp
David Crelley - Head of Responsible AI and Data, Admiral Group

The EU AI Act mandates human oversight, yet in practice this often means junior staff become “AI checkers”, rubber-stamping automated outputs. In this candid session, Dr David Crelley, Head of Responsible AI & Data at Admiral Group, challenges the compliance-driven interpretation of human-in-the-loop and examines why oversight designed for efficiency frequently undermines effectiveness and accountability. He will share how his team is rethinking oversight as proactive engagement: using judge LLMs to handle the heavy lifting, deliberately introducing friction, and building cultural ownership and meaningful intervention points into AI-enabled processes. David offers practical insight into how to design oversight models that create genuine human engagement rather than passive validation.

• Moving from passive validation to active engagement.

• Using LLMs to do the basic checks.

• Designing friction to strengthen human judgement.

• Embedding cultural ownership into AI oversight.


David Crelley

Head of Responsible AI and Data
Admiral Group

Stream B – Technical: Data, Evaluation & Readiness

11:10 am - 11:40 am Presentation – Session Details To Be Announced

Stream B – Technical: Data, Evaluation & Readiness

11:40 am - 12:10 pm Panel Discussion – Automating the Data Engine: Making CEOs Actually Invest in Data
Andy Chi - Senior Legal Counsel and Data & AI Governance Lead, SHEIN

Every organisation says data is strategic, yet most still run on brittle pipelines, stale datasets, and underfunded infrastructure. As automation, real-time AI systems, and agentic workflows explode, the gap between what models need and what data teams receive is widening fast. In this panel, speakers will unpack what “automated data” truly requires today, how to quantify its ROI, and how to build foundations that make Responsible AI actually possible.

• Quantifying ROI to make data investment unavoidable.
• Automating pipelines for reliable, real-time data readiness.
• Building infrastructure that enables safe, scalable Responsible AI.

Andy Chi

Senior Legal Counsel and Data & AI Governance Lead
SHEIN

Stream B – Technical: Data, Evaluation & Readiness

12:10 pm - 12:40 pm Presentation – AI-Assisted Coding in Global Market Infrastructure
Martin Koder - Head of Responsible AI, Swift
Talk Details To Be Announced
Martin Koder

Head of Responsible AI
Swift

Stream B – Technical: Data, Evaluation & Readiness

12:40 pm - 1:10 pm Breakout Session – Beyond Benchmarks: What Actually Breaks in LLMs (and How to Evaluate It)
Shay Weiss - Head of Ireland’s Engineering, DevOps and Product, Director WBA Digital, Walgreens Boots Alliance

Accuracy scores and leaderboard rankings only tell a small part of the story. In real environments, LLMs can be influenced, manipulated, and pushed into failure modes that standard evaluation simply doesn't capture. This session focuses on how LLMs behave once deployed, especially from a risk and security perspective. Drawing on practical examples such as prompt injection, RAG poisoning, and everyday failure patterns, we'll explore where current evaluation approaches fall short and what organisations should be testing instead. Rather than asking "is the model accurate?", the session reframes the question to: "how does the model behave under pressure, and where does it break?"

• Understanding how LLMs can be influenced and where real risks emerge.

• Identifying failure modes such as prompt injection, RAG poisoning, and hidden vulnerabilities.

• Rethinking evaluation to reflect real usage, adversarial conditions, and enterprise risk.


Shay Weiss

Head of Ireland’s Engineering, DevOps and Product, Director WBA Digital
Walgreens Boots Alliance

Lunch

1:10 pm - 2:10 pm Lunch in the Exhibition Hall: Network With Your Peers

1:40 pm - 2:05 pm RAI Lunch Quiz

Early Afternoon Session

Stream A – Governance: Ethical AI

2:00 pm - 2:30 pm Presentation – Humanising AI: Acting Before Regulation Forces Our Hand
Sarah Mathews - Group Responsible AI Manager, The Adecco Group

As AI systems become more conversational, expressive, and agentic, the push to humanise them is accelerating. In China, widespread deployment of highly human-like AI across daily life has already triggered regulatory action aimed at curbing over-anthropomorphism and protecting users from misplaced trust. Europe now has a choice: learn early or regulate late. In this session, Sarah Mathews, Group Responsible AI Manager at Adecco Group, explores what responsible design looks like before AI systems blur the line between tool and perceived actor, and how transparency, human-centricity, and governance must evolve as agentic capabilities scale.

• Anticipating regulatory risk from humanised AI systems.

• Designing transparency into increasingly agentic experiences.

• Protecting human agency as AI embeds into daily life.


Sarah Mathews

Group Responsible AI Manager
The Adecco Group

Stream A – Governance: Ethical AI

2:30 pm - 3:00 pm Presentation – Just Because You Can Doesn’t Mean You Should: Innovating Ethically with AI
Roos Brekelmans - Digital Innovation Accelerator, Vattenfall

As AI increasingly shapes core product, operational and strategic decisions, many organisations still treat ethics as a policy artefact or compliance exercise - often detached from the reality of day to day decision-making. Roos Brekelmans, Digital Innovation Accelerator at Vattenfall, explores the ethical, strategic and trust risks created by this gap. Rather than framing ethical AI as a control or governance problem, she reframes it as a practical business discipline - and a conversation skill. Drawing on hands-on innovation experience and academic foundations in technology ethics, this session moves beyond abstract principles to focus on concrete, repeatable practices that teams and leaders can embed directly into their innovation pipelines. Using AI responsibly is positioned not as a brake on progress, but as a lever for resilient, scalable, and strategically sound innovation.

• Embedding ethics into everyday and strategic business decisions.

• Using values to navigate real AI trade-offs under uncertainty.

• Linking responsible innovation directly to enterprise value and trust at scale.


Roos Brekelmans

Digital Innovation Accelerator
Vattenfall

Stream A – Governance: Ethical AI

3:00 pm - 3:30 pm Panel Discussion – Forgotten by Automation: The Human Workforce Behind AI
Mark Graham - Professor of Internet Geography, Director of Fairwork, University of Oxford, Fairwork
Krisztina Pinter - Head of Global Service Planning and Data Governance, SONY

As AI systems scale, the human labour powering them (data annotators, content moderators, and crowd workers) becomes operationally invisible. Yet laws such as the Corporate Sustainability Due Diligence Directive make oversight of supply chains a legal obligation, not a reputational choice. Ethical risk in AI no longer sits only within the model; it extends across the global labour networks that train, filter, and sustain these systems.

This panel examines why auditing your AI supply chain matters, and what meaningful accountability looks like in practice. Speakers will highlight how standards-based evaluation, like Fairwork AI, can expose hidden labour risks, benchmark working conditions, and create enforceable transparency. The discussion moves beyond awareness to explore due diligence, procurement leverage, and how organisations can align AI deployment with fair work principles under emerging regulatory pressure.

• Exposing hidden labour across AI supply chains.

• Using certification frameworks to benchmark labour standards.

• Embedding due diligence into AI procurement decisions.


Mark Graham

Professor of Internet Geography, Director of Fairwork
University of Oxford, Fairwork

Krisztina Pinter

Head of Global Service Planning and Data Governance
SONY

Stream B – Technical: Controllability & Emergent Risk

2:00 pm - 2:30 pm Presentation – Adaptive by Design: Evolving the Model Risk Lifecycle for Generative AI at Aviva
Nadeem Chaudhry - Model Risk and AI Governance Lead, Aviva
Roberto Martin-Reguera - Group Model Validation Director, Aviva

Generative AI is accelerating across insurance, bringing new capabilities and new risks to underwriting, claims, and customer operations. In this evolving environment, model risk functions must ensure that governance frameworks, validation processes, and lifecycle controls can adapt without losing rigour.

In this joint session, Roberto Martin-Reguera (Group Model Validation Director) and Nadeem Chaudhry (Model Risk and AI Governance Lead) will share how Aviva is evolving the model risk lifecycle for GenAI. They will discuss how risk assessments and controls have been redesigned for GenAI-specific behaviours, how automation is being embedded to enhance oversight at scale, and how independent validation approaches have been adapted to reflect the unique characteristics of GenAI models.
They will also share practical challenges, lessons learned, and what must continue to evolve as GenAI becomes embedded across the business.

Nadeem Chaudhry

Model Risk and AI Governance Lead
Aviva

Roberto Martin-Reguera

Group Model Validation Director
Aviva

Stream B – Technical: Controllability & Emergent Risk

2:30 pm - 3:00 pm Presentation – Trust is the Real Gem: Governing AI at Pandora
Jennifer Cheung - AI Enablement & Responsible AI Data Scientist, Pandora
Jennifer Cheung

AI Enablement & Responsible AI Data Scientist
Pandora

Stream B – Technical: Controllability & Emergent Risk

3:00 pm - 3:30 pm Panel Discussion – When Bias Becomes Infrastructure: Governing Generative AI at Scale
Olu Akinyede - Data Privacy, Data Governance and AI Ethics, Aviva
Clara Higuera Cabañes - Responsible AI Program Lead, BBVA
Uthman Ali - Associate Fellow, University of Oxford

As generative AI embeds across enterprise systems, bias becomes structural: shaping decisions, communications, and customer outcomes at scale. What begins as a model limitation can quickly become a systemic risk, reinforced by data provenance, prompt design, feedback loops, and organisational incentives. This panel takes a critical look at bias as a socio-technical challenge spanning the full AI lifecycle. Moving beyond surface-level fairness claims, panellists will examine how to measure structural and emergent bias in production, define meaningful accountability, and govern generative systems that continuously adapt and influence behaviour.

• Examining bias across the AI lifecycle.

• Measuring structural and emergent bias in production.

• Embedding accountability beyond technical fixes.


Olu Akinyede

Data Privacy, Data Governance and AI Ethics
Aviva

Clara Higuera Cabañes

Responsible AI Program Lead
BBVA

Uthman Ali

Associate Fellow
University of Oxford

Afternoon Closing Session

3:30 pm - 4:00 pm Afternoon Networking Refreshment Break

4:00 pm - 4:30 pm Afternoon Fireside Chat – Session Details To Be Announced


4:30 pm - 5:00 pm Afternoon Plenary Keynote – AI In The Loop: Why HITL No Longer Guarantees Agency

Bogdan Vrusias - Global Head of AI and Data Engineering, The Economist

“Human in the loop” emerged as a human-centric safeguard: a way to keep people visibly involved in AI workflows and preserve human judgement at critical points. But as AI systems become increasingly agentic and capable of shaping the workflow itself, this model often reduces the human role to validation, monitoring, or procedural compliance. In this session, Bogdan Vrusias, Global Head of AI and Data Engineering at The Economist, examines how HITL can unintentionally narrow human agency rather than reinforce it. He argues for reframing the loop so that AI sits within a broader human process, one grounded in contextual reasoning, editorial discretion, and genuine ownership. The session offers a more honest and future-proof approach to ensuring that humans guide the system, not the reverse.

• Revealing how HITL often constrains, rather than expands, human decision space.

• Reframing AI as embedded in human processes, not humans inside machine loops.

• Designing agentic-era workflows that preserve judgement, context, and real agency.


Bogdan Vrusias

Global Head of AI and Data Engineering
The Economist

As AI systems become more capable, organisations face a critical strategic question: is automation always the answer? What should be automated, and what should remain human? The line between efficiency and overreach is becoming harder to see, especially as “agentic” systems begin making decisions over time, not just executing predefined tasks. This plenary panel moves beyond technical definitions to examine boundaries. Where does useful automation end and risky autonomy begin? When does delegating decision-making create value, and when does it erode accountability, resilience, or trust? Leaders will explore not just what is possible, but what is appropriate, sustainable, and strategically aligned.

• Defining boundaries between automation and true agency.
• Questioning when automation strengthens or weakens value.
• Designing accountability alongside increasing autonomy.

Siân Townson

Head of AI and Data Science
Heathrow

Henri Kujala

Global Head of Privacy and Responsible AI by Design
Vodafone

Martin Woodward

Director Global Legal & Global Responsible AI Officer
Randstad

5:30 pm - 5:30 pm Chair's Closing Remarks & End of Conference