Responsible AI Summit Main Conference Day 1 - Tuesday 23 September


Morning Plenary Session

8:00 am - 8:45 am Registration & Breakfast

8:45 am - 9:00 am Chair's Opening Remarks

9:00 am - 9:30 am Opening Panel Discussion: From Principles to Progress: What’s Actually Changed in Responsible AI This Year?

Anna Zeiter - Chief Privacy Officer, eBay
Paul Dongha - Head of Responsible AI & AI Strategy, NatWest Group
JoAnn Stonier - Executive Vice President, Chief Data Officer, Mastercard

The pace of AI adoption—and regulation—has accelerated dramatically in the past 12 months. As global frameworks mature and generative AI moves from pilot to production, how are organisations adapting their responsible AI strategies in real time? This opening panel brings together senior leaders to reflect on the most meaningful shifts in the last year—from governance structures and tooling to regulatory readiness and internal culture.

·       Responding to evolving AI regulations with practical, cross-functional governance updates

·       Adapting RAI frameworks for the age of GenAI, foundation models, and real-time use cases

·       Moving from awareness to accountability: What’s working, what’s not, and what’s next

Join us for a grounded look at how Responsible AI has moved from theory to action in the past year—and where the pressure points remain.


9:30 am - 10:00 am Morning Plenary Keynote – Governing the Unpredictable: Responsible AI in the Age of Agents

As AI systems become more autonomous, organisations face new challenges in governance, security, and ethics. Traditional oversight models struggle to keep pace with AI capable of independent decision-making, raising critical questions about accountability, liability, and risk management. This session explores how enterprises can prepare for the rise of agentic AI while ensuring responsible deployment.

·       Developing governance frameworks for increasingly autonomous AI systems.

·       Implementing technical safeguards to manage risks and ensure accountability.

·       Aligning ethical considerations with business objectives and regulatory expectations.

Slot Reserved for Lead Sponsor Partner   

10:00 am - 10:30 am Panel Discussion – Navigating Global AI Regulatory Frameworks
Kai Zenner - Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group), European Parliament
Benedict Dellot - Technology Policy Principal (Generative AI), Ofcom
Tharishni Arumugam - Global AI Compliance Director, Aon
Nuala Polo - UK Public Policy Lead, Ada Lovelace Institute

Amid rapid advancements in AI technology, navigating global regulatory frameworks has become paramount for organisations aiming to balance innovation with compliance. This session explores key developments in AI regulation across major jurisdictions, highlighting the EU AI Act alongside UK and US approaches, while addressing geopolitical influences and strategies for managing regulatory divergence and uncertainty.

·       Understanding current global AI regulatory landscapes.

·       Navigating regulatory complexities while fostering innovation.

·       Developing strategies to ensure compliance and competitive advantage.

10:30 am - 11:00 am Morning Networking Coffee Break

Morning Streamed Sessions

Governance Stream - Operationalizing AI Governance Frameworks

11:00 am - 11:30 am Presentation – Operationalizing Responsible AI in Energy Management and Industrial Automation: Schneider Electric's Practical Approach to AI Governance
Ania Kaci - Global Leader for Responsible AI, Schneider Electric

As organizations navigate the complex landscape of AI implementation, many struggle with moving beyond theoretical frameworks to practical governance solutions—especially in highly regulated industries like energy. Ania Kaci, Global Leader for Responsible AI at Schneider Electric, brings frontline expertise from building and scaling a cross-functional Responsible AI program across a multinational energy management and industrial automation leader.

Drawing from Schneider Electric's journey developing high-risk AI systems for energy management, predictive maintenance, and industrial automation, Ania will provide actionable insights on:

·       Translating EU AI Act requirements into practical governance frameworks that align with sectoral regulations and business objectives

·       Building effective AI risk assessment methodologies specifically designed for energy and industrial automation applications

·       Creating a thriving Responsible AI community that extends beyond legal and technical teams to drive organization-wide AI literacy

·       Balancing innovation with compliance across global operations and regulatory landscapes

·       Leveraging AI for sustainability goals while addressing AI's own environmental impact


Governance Stream - Operationalizing AI Governance Frameworks

11:30 am - 12:00 pm Panel Discussion – Centralized vs. Federated AI Governance: Finding the Right Balance
Anastasia Zygmantovich - Global Data Science and Data Visualisation Director, Reckitt
Penny Jones - Responsible AI Lead, Zurich Insurance UK
Martin Woodward - Global Responsible AI Officer, Randstad

When scaling AI governance across global organizations, leaders face a key decision: centralized or federated governance? This session will examine the advantages and challenges of both approaches, providing strategies to determine the optimal model for your organization’s size, complexity, and regulatory environment.

·       Comparing centralized and federated governance models for global AI deployment.

·       Building flexible frameworks that scale with organizational needs and regulations.

·       Implementing governance structures that ensure consistency while allowing for local autonomy.


Governance Stream - Operationalizing AI Governance Frameworks

12:00 pm - 12:30 pm Presentation – Session Details to be Announced


Governance Stream - Operationalizing AI Governance Frameworks

12:30 pm - 1:10 pm Roundtable Discussion – AI Governance in Action: A Roundtable Case Study Deep Dive
Hellena Crompton - Data Protection Officer UK&I, dentsu international
Ronnie Chung - Group Head of Responsible AI, Centrica

In this interactive roundtable, you will share and dissect real-world case studies of AI governance in action. Come prepared with examples from your own organization to fuel a dynamic discussion about the challenges and successes of implementing AI governance frameworks. Together, we will explore key themes such as decision-making structures, roles and responsibilities, and the practicalities of operationalizing governance models at scale.

·       Sharing and critiquing case studies of AI governance frameworks.

·       Discussing challenges and lessons learned from real-world implementations.

·       Identifying best practices for scaling and ensuring accountability across AI initiatives.


Technical Stream - Observability, Evaluation & Testing of AI Systems

11:00 am - 11:30 am Presentation – From Foundation to Application: Measuring Language- and Culture-Specific Bias in LLM-Powered Applications
Clara Higuera - Responsible AI Lead, BBVA

How do LLMs behave across different languages and cultural contexts — and what does this mean for fairness in real-world deployments? Clara, Responsible AI lead at BBVA, is translating her applied research on multilingual LLM bias into concrete methodologies to evaluate bias in LLM-powered applications. Her work focuses on identifying performance disparities across demographics and vulnerable groups that could lead to discriminatory outcomes.

  • Using control and adversarial testing, as well as cultural probing, to surface hidden biases in LLM-powered applications
  • Adapting open-domain bias metrics to enterprise-specific use cases and decision contexts
  • Integrating fairness assessments into scalable pipelines with sociotechnical oversight
  • Embedding bias evaluations into product workflows to support accountable, trustworthy, human-centered AI



Technical Stream - Observability, Evaluation & Testing of AI Systems

11:30 am - 12:00 pm Presentation – Securing Sensitive Systems: Data Governance & Information Security for AI in Healthcare
Bernie Pilgram - Lead Product Data & Information Security Governance, Fresenius Medical Care

At Fresenius Medical Care, ensuring patient safety and trust isn’t just a goal—it’s a mandate. As AI becomes embedded in critical systems such as clinical decision tools and operational workflows, the challenge is to ensure patient safety and to protect sensitive personal health data while fostering innovation. Bernie, who drives Product Data & Information Security Governance, will discuss how the company addresses these complex, large-scale challenges, where data privacy, information security, ethics, and regulatory requirements converge with responsible AI deployment.

  • Navigating the intersection of data privacy, information security, ethics, and AI in regulated healthcare environments.
  • Identifying and mitigating bias in datasets and models.
  • Securing sensitive data through governance across dynamic, evolving AI systems.

Technical Stream - Observability, Evaluation & Testing of AI Systems

12:00 pm - 12:30 pm Presentation – Session Details to be Announced

Technical Stream - Observability, Evaluation & Testing of AI Systems

12:30 pm - 1:10 pm Roundtable Discussion – Building Effective Monitoring Frameworks for Deployed AI Systems
Andrea Cosentini - Head of Data Science & Responsible AI, Intesa Sanpaolo Bank

Monitoring AI systems post-deployment is key to ensuring their continued efficacy and safety. This session will focus on technical frameworks for tracking AI performance, detecting anomalies, and ensuring compliance with established governance practices. Learn about the tools that enable real-time monitoring and proactive management of deployed AI models.

·       Learning about frameworks for continuous monitoring of AI system performance.

·       Exploring anomaly detection tools to ensure AI systems perform as expected.

·       Understanding compliance and governance monitoring for deployed AI.


Lunch

1:10 pm - 2:05 pm Lunch in the Exhibition Hall: Network with your Peers

Afternoon Streamed Sessions

Governance Stream - Regulatory Compliance & Risk Management

2:05 pm - 2:35 pm Presentation – From Food to Fork: How Nestlé Is Building Global AI Governance in a Non-Tech-Native Enterprise
Laura Perea Virgili - Senior Product Manager of Responsible AI, Nestlé

Nestlé is the world’s largest food and beverage company, with 300,000 employees, a digital ecosystem spanning 185 countries, and an AI portfolio powering everything from demand forecasting to food safety. But how does a non-tech-native organisation govern AI responsibly on a global scale? In this session, Laura will share how Nestlé’s hybrid governance model—balancing global oversight with local flexibility—enables responsible, enterprise-wide AI adoption without losing sight of cultural nuance, operational complexity, or ethical risk. This talk will delve into: 

  • Building governance on existing digital foundations: culture, quality, legal alignment, and values
  • Applying a hybrid model: global policy with local adaptation across 185 countries
  • Embedding people, process, and tool-based controls into every stage of AI deployment

Governance Stream - Regulatory Compliance & Risk Management

2:35 pm - 3:05 pm Presentation – Session Details to be Announced


Governance Stream - Regulatory Compliance & Risk Management

3:05 pm - 3:35 pm Fireside Chat – A Practical Guidebook to AI Compliance
Matthias Holweg - American Standard Companies Professor of Operations Management, Saïd Business School, University of Oxford
Uthman Ali - Global Responsible AI Officer, bp

Session Details to be Announced


Technical Stream - AI Safety, Security & Cybersecurity Integration

2:05 pm - 2:35 pm Presentation – Embedding AI Security by Design: Cross-Functional Collaboration at Philip Morris International
Ray Ellis - Head of AI Security, Philip Morris International

As AI becomes embedded in core business and operational functions, safeguarding these systems requires deep collaboration between AI and cybersecurity teams. At Philip Morris International, Ray, Head of AI Security, is leading efforts to ensure security is not an afterthought—but a foundational principle across AI development. This session will share how PMI is building secure-by-design AI practices in a complex, global environment, while aligning with compliance and risk expectations.

  • Building integrated workflows between AI and cybersecurity to address risks early and effectively
  • Operationalising security protocols tailored to AI-specific threats across global infrastructure
  • Aligning security, compliance, and innovation priorities to support responsible AI at scale



Technical Stream - AI Safety, Security & Cybersecurity Integration

2:35 pm - 3:05 pm Presentation – Session Details to be Announced

Slot Reserved for Sponsor Partner

Technical Stream - AI Safety, Security & Cybersecurity Integration

3:05 pm - 3:35 pm Panel Discussion – Implementing Guardrails and Content Filtering for Responsible AI Use
Ray Ellis - Head of AI Security, Philip Morris International
Stephanie Cairns - Senior Data Scientist, AI Risk, OVO Energy
Vijay Mitra - Generative AI & Responsible AI Risk Lead, Nationwide Building Society

Establishing guardrails for responsible AI deployment is essential for minimizing risk and ensuring ethical outcomes. This session will cover how to design and implement content filtering mechanisms and establish safeguard protocols that prevent harmful AI behaviour, especially in sensitive or regulated environments.

·       Developing technical guardrails and content filtering for ethical AI deployment.

·       Understanding the importance of regulatory compliance in AI safety design.

·       Implementing mechanisms for proactive monitoring and control of AI outputs.


Afternoon Plenary Session

3:35 pm - 4:05 pm Afternoon Refreshment Networking Break

4:05 pm - 4:35 pm Afternoon Plenary Keynote – AI at Full Speed: Balancing Innovation with Responsible Deployment

As AI adoption accelerates, organisations face a critical challenge—how to drive innovation without compromising ethical standards, security, and compliance. The pressure to stay competitive often clashes with the need for responsible implementation, creating tension between speed and safeguards. This session explores strategies for achieving both.

●      Aligning AI innovation with ethical, legal, and business risk considerations.

●      Building governance frameworks that support agility without stifling progress.

●      Fostering a culture where responsibility and innovation go hand in hand.

 

Slot Reserved for Sponsor Partner    

4:35 pm - 5:05 pm Afternoon Plenary Keynote – The Fundamentals of AI Enterprise Governance

Oliver Patel - Enterprise AI Governance Lead, AstraZeneca

Exclusive global book launch


5:05 pm - 5:50 pm Afternoon RAI Real Talk Roundtable Session

James Fletcher - Head of Responsible AI, BBC

Ever go to conferences hoping for practical insights that can actually help you do responsible AI better day to day, only to come away empty-handed after yet another high-level discussion about the EU AI Act? Well, this session is for you. It will be interactive, it will be fun, and most of all it will be a chance to get answers to your burning questions about how to survive and thrive as a responsible AI practitioner doing it for real in the fast-changing, complex world of AI.


5:50 pm - 6:00 pm Chair's Closing Remarks & End of Conference Day 1

6:00 pm - 8:00 pm Evening Networking Drinks Reception