Responsible AI Summit Main Conference Day 2 - Wednesday 24 September


Morning Plenary Session

8:00 am - 8:40 am Networking Morning Breakfast & Coffee

8:40 am - 8:45 am Chairs Opening Remarks

As AI continues to reshape industries, the EU AI Act’s emphasis on AI literacy has made it a critical priority for enterprises. Ensuring that employees are well-versed in AI technologies isn’t just about compliance—it's essential for fostering innovation, mitigating risk, and building trust in AI systems. Organisations must invest in upskilling their workforce to ensure a smooth transition into an AI-powered future while adhering to new regulations.

·       Equipping employees with AI literacy to meet EU AI Act mandates and compliance standards.

·       Reskilling and upskilling talent for an AI-powered, regulated workforce.

·       Building a culture of collaboration where human expertise and AI complement one another.

img

Oliver Patel

Enterprise AI Governance Lead
AstraZeneca

img

Carol Wilson

AI Ethics and Governance Fellowship of Information Privacy
Royal London

img

Dara L. Sosulski

Managing Director, Head of Artificial Intelligence and Model Management
HSBC

9:15 am - 9:45 am Morning Keynote Presentation – Session Details to be Announced

Slot Reserved for Sponsor Partner

8:00 am - 8:30 am Morning Roundtable Discussion – Interactive Breakouts: Responsible AI ‘Hive Mind’

Myrna Macgregor - AI Risk Lead, OVO Energy

Bring your real-world responsible AI challenges, and tap into the collective intelligence of your peers for innovative solutions and fresh perspectives.

Key Takeaways:

  • Gain diverse insights and practical suggestions for tackling live problems or challenges.
  • Expand your network and learn from the experiences and expertise of fellow practitioners.
img

Myrna Macgregor

AI Risk Lead
OVO Energy

10:30 am - 11:00 am Morning Coffee Networking Break

Morning Streamed Sessions

Governance Stream – AI For Good

11:00 am - 11:30 am Presentation – Operationalising Responsible AI for Critical Infrastructure Resilience
Ronnie Chung - Group Head of Responsible AI, Centrica

As energy systems digitise and GenAI adoption accelerates, critical infrastructure operators face new regulatory scrutiny, cyber threats, and resilience risks. At Centrica, building a Responsible AI Framework has been key to scaling innovation while safeguarding operations, customers, and society. Attend this talk to understand how Ronnie and his team are:

·       Embedding AI governance across GenAI, ML, and critical energy infrastructure systems

·       Aligning risk tiering with regulatory, cyber, and environmental resilience expectations

·       Translating responsible AI principles into action across complex, distributed operations


img

Ronnie Chung

Group Head of Responsible AI
Centrica

Governance Stream – AI For Good

11:30 am - 12:00 pm Presentation – AI for a Sustainable Future: Driving Decarbonisation While Reducing AI’s Own Footprint
Myrna Macgregor - AI Risk Lead, OVO Energy

As one of the UK’s leading energy providers, OVO Energy is focused on enabling a greener future—and that includes how it uses AI. In this session, Myrna Macgregor, AI Risk Lead, explores the dual challenge of using data and AI to support decarbonisation, while also addressing the hidden environmental costs of AI itself. From energy optimisation to model accountability, discover how a responsible approach to AI can serve both innovation and sustainability goals.

  • Applying AI and data-driven insights to accelerate net-zero initiatives across energy systems
  • Tracking, measuring, and managing the carbon impact of enterprise AI development and deployment
  • Implementing practical strategies to reduce AI’s environmental footprint without slowing progress
img

Myrna Macgregor

AI Risk Lead
OVO Energy

Governance Stream – AI For Good

12:00 pm - 12:30 pm Presentation – Session Details to be Announced


Governance Stream – AI For Good

12:30 pm - 1:00 pm Presentation – Framework to Function: Scaling Responsible AI with Tools, Governance, and Trust in the Development Area of Novo Nordisk
Per Rådberg Nagbøl - Senior Data & AI Governance Professional, Novo Nordisk

As AI adoption accelerates across industries, ensuring responsible, scalable, and consistent deployment is more critical than ever—especially in highly regulated sectors like pharma. At Novo Nordisk, responsible AI is not a siloed initiative but combines collective effort across the enterprise with area and domain specific requirements. From aligning with evolving legislation like the EU AI Act to making it easier for practitioners to comply, this session explores how practical tooling, a trustworthy AI council, and other strong AI governance can turn frameworks into action.

  • Establishing tools that embed compliance into AI workflows and decisions
  • Building governance structures that scale with enterprise-wide AI adoption
  • Treating AI as collaborative work, not standalone machine decision
img

Per Rådberg Nagbøl

Senior Data & AI Governance Professional
Novo Nordisk

Technical Stream – Technical Safeguards & Agentic AI

11:00 am - 11:30 am Presentation – Engineering “Talk to Data”: Building Semantic Foundations for Responsible AI at National Scale
Masood Alam - Chief Data Architect, The Scottish Government

As AI adoption grows, so does the need for machines—and humans—to understand data in context. In the Scottish Government, Masood and the Digital Directorate’s technical team are architecting the data foundations that make responsible, explainable AI possible. This session explores the hands-on work behind enabling AI systems to interact with public sector data through intelligent tagging, synonym resolution, and semantic modelling. Learn how knowledge graphs, metadata enrichment, and NLP interfaces are unlocking scalable, governed access to complex datasets—without compromising on accountability.

·       Architecting semantic layers using knowledge graphs to power explainability and data lineage

·       Automating tagging, classification, and synonym resolution across siloed, federated data sources

·       Enabling NLP-driven “talk to data” interfaces to widen access without exposing risk

img

Masood Alam

Chief Data Architect
The Scottish Government

Technical Stream – Technical Safeguards & Agentic AI

11:30 am - 12:00 pm Panel Discussion – Testing and Verifying Agentic AI: Methodologies for Safety and Control
Paul Dongha - Head of Responsible AI & AI Strategy, NatWest Group
Dara L. Sosulski - Managing Director, Head of Artificial Intelligence and Model Management, HSBC

Ensuring the safety of agentic AI systems requires thorough testing and verification methods. This session will dive into the best practices for testing autonomous AI, focusing on verification protocols that ensure the systems behave predictably and ethically under all conditions.

·       Exploring testing methodologies for ensuring the safety of autonomous AI systems.

·       Implementing verification processes that guarantee compliance with governance standards.

·       Addressing the challenges of testing agentic AI and its ethical implications.

img

Paul Dongha

Head of Responsible AI & AI Strategy
NatWest Group

img

Dara L. Sosulski

Managing Director, Head of Artificial Intelligence and Model Management
HSBC

Technical Stream – Technical Safeguards & Agentic AI

12:00 pm - 12:30 pm Presentation – Session Details to be Announced

Technical Stream – Technical Safeguards & Agentic AI

12:30 pm - 1:00 pm Presentation - Guardrailing Generative AI at Scale: Intesa Sanpaolo’s Technical Approach to Mitigating Risk
Alessandro Castelnovo - Head of Responsible AI – Data Science & Responsible AI, Intesa Sanpaolo

As generative AI rapidly integrates into banking workflows—from customer service chatbots to advisory copilots—the risks of hallucination, toxicity, privacy violations, and regulatory non-compliance are rising. Intesa Sanpaolo, one of Europe’s largest banking groups, is tackling this head-on. In this session, Alessandro, Head of Responsible AI, shares how his team is designing and implementing technical guardrails around generative AI models, with a sharp focus on risk mapping, prompt injection protection, and fundamental rights impact assessments.

  • Building proactive defences against prompt injection and out-of-context content generation
  • Implementing toxicity filters and privacy safeguards in systems leveraging external LLMs
  • Operationalizing EU AI Act principles through guardrails and continuous risk monitoring

Alessandro Castelnovo, Head of Responsible AI, Intesa Sanpaolo

img

Alessandro Castelnovo

Head of Responsible AI – Data Science & Responsible AI
Intesa Sanpaolo

Lunch

1:00 pm - 2:00 pm Lunch in the Exhibition Hall: Network With Your Peers

Afternoon Streamed Sessions

Governance Stream – AI Literacy & Organisational Transformation

2:00 pm - 2:30 pm Presentation – Minimising Friction in AI Governance: Balancing Trust, Innovation, Governance and Intellectual Property
Andi McAleer - Head of Data & AI Governance, Financial Times

In an industry where content is the product, AI governance and decision making involves carefully maintaining a balance between risk to assets and potential value. Andi, Head of Data and AI Governance at the Financial Times, shares how a 135-year-old premium news organisation effectively governs a wide programme of AI-enabled solutions and experiments that enable experimentation without undermining their intellectual property or journalistic integrity

·       Creating flexible, multi-tiered governance tools that serve different stakeholder needs—from quick checklists to comprehensive consultations

·       Positioning AI governance as an innovation partner rather than a gatekeeper through approachable, frictionless processes

·       Developing practical methods to evaluate AI use cases against ethical frameworks while maintaining competitive advantage

·       Navigating the tension between exploring new AI-driven business models and safeguarding premium content value

img

Andi McAleer

Head of Data & AI Governance
Financial Times

Governance Stream – AI Literacy & Organisational Transformation

2:30 pm - 3:00 pm Presentation – Session Details to be Announced

Governance Stream – AI Literacy & Organisational Transformation

3:00 pm - 3:30 pm Panel Discussion – Upskilling for the Future: Developing Effective AI Training Programs
Danielle Langford - Responsible AI Specialist, Zurich Insurance
Georgiana Marsic - Former Principal AI Manager, Jaguar Land Rover
Oriana Medlicott - Responsible AI Lead, Admiral Group

As AI continues to evolve, so must the skills of the workforce. This session will examine best practices for developing training and upskilling programs that enable employees to understand and responsibly engage with AI technologies. From foundational education to specialized workshops, we’ll cover how to structure learning paths that meet the needs of both technical and non-technical teams.

·       Design training programs tailored to both technical and non-technical employees.

·       Create learning paths that support responsible AI integration and adoption.

·       Foster continuous AI education to stay ahead in an evolving landscape.

img

Danielle Langford

Responsible AI Specialist
Zurich Insurance

img

Georgiana Marsic

Former Principal AI Manager
Jaguar Land Rover

img

Oriana Medlicott

Responsible AI Lead
Admiral Group

Technical Stream – Responsible AI Design, Development & Deployment

2:00 pm - 2:30 pm Presentation: Regulating AI, Using AI: The FCA’s Dual Role in Shaping and Embedding Responsible AI
Fatima Abukar - Principal Advisor, Responsible AI and Data, FCA

As AI transforms financial services, the Financial Conduct Authority is playing a dual role: setting expectations for responsible innovation across the sector while embedding responsible AI practices within its own organisation. In this session, Fatima, Principal Advisor for Responsible AI and Data, offers a rare window into both sides of that journey. From aligning internal frameworks with data privacy, cyber and legal requirements, to collaborating with DSIT and Ofcom on national policy, Fatima explores what responsible AI means in practice—for regulators and the regulated alike.

 

  • Applying principles internally to support safe and responsible use of AI
  • Shaping future-facing regulation through collaboration, research, and open engagement
  • Building internal capability through data strategy, literacy, and AI-specific governance structures and Data and AI Ethics Frameworks.
img

Fatima Abukar

Principal Advisor, Responsible AI and Data
FCA

Technical Stream – Responsible AI Design, Development & Deployment

2:30 pm - 3:00 pm Presentation - Session Details to be Announced

Technical Stream – Responsible AI Design, Development & Deployment

3:00 pm - 3:30 pm Panel Discussion – Ethical Considerations in AI-Assisted Decisions: Navigating Algorithmic Management
Philippa Penfold - Responsible AI & Data Science Manager, Elsevier
Sarah Matthews - Group Responsible AI Manager, The Adecco Group
Vijay Mitra - Generative AI & Responsible AI Risk Lead, Nationwide Building Society

AI’s increasing role in workplace decisions, from hiring to performance management, raises important ethical concerns. This session will explore the ethical implications of AI-driven decisions, focusing on transparency, fairness, and accountability. We will address the role of governance in ensuring AI systems are used responsibly and in ways that uphold organizational values and employee rights.

·       Examining the ethical impact of AI-assisted decisions on the workforce.

·       Discussing strategies for ensuring fairness, transparency, and accountability in AI algorithms.

·       Managing the ethical challenges of algorithmic management in the workplace.

img

Philippa Penfold

Responsible AI & Data Science Manager
Elsevier

img

Sarah Matthews

Group Responsible AI Manager
The Adecco Group

img

Vijay Mitra

Generative AI & Responsible AI Risk Lead
Nationwide Building Society

Afternoon Plenary Session

3:30 pm - 4:00 pm Afternoon Networking Refreshment Break

4:30 pm - 5:00 pm Afternoon Plenary Presentation - Scaling AI Governance in Healthcare: Balancing Regulation, Risk, and Real-World Impact at Philips

Arlette Van Wissen - Responsible and Sustainable AI Lead, Philips
Ger Janssen - AI Ethics & Compliance Lead, Philips

As a multinational health technology company, Philips operates at the intersection of AI innovation, medical regulation, and enterprise governance. In this session, the Responsible AI team shares how they’re embedding scalable AI risk frameworks into enterprise risk structures—while also navigating the evolving regulatory landscape of the EU AI Act within an already heavily regulated medical domain. From bias mitigation to sustainability, this talk explores what responsible AI looks like when patient safety and compliance are non-negotiable.

·       Aligning AI governance with enterprise risk management across a highly regulated global organisation

·       Translating evolving AI-specific regulations into practical controls within clinical-grade systems

·       Driving bias mitigation strategies tailored to the complexities of healthcare data and use cases

img

Arlette Van Wissen

Responsible and Sustainable AI Lead
Philips

img

Ger Janssen

AI Ethics & Compliance Lead
Philips

4:30 pm - 5:00 pm Presentation - Session Details to be Announced


5:00 pm - 5:30 pm Afternoon Plenary Presentation – Vendor & Third-Party AI Governance: Navigating Shared Responsibility and Risk

Luke Vilain - AI Governance Lead, UBS

As organisations increasingly rely on third-party AI providers, managing AI governance across vendors becomes a critical challenge. Shared responsibility models, vendor due diligence, and AI supply chain risks require a holistic approach to ensure alignment with internal standards and regulatory requirements. This session will explore best practices for managing third-party risks and integrating AI governance into procurement processes.

·       Delving into AI inventories – are they needed?

·       Developing robust shared responsibility frameworks with AI technology providers.

·       Implementing due diligence processes to evaluate AI vendors and tools.

·       Assessing and mitigating risks within the AI supply chain.

img

Luke Vilain

AI Governance Lead
UBS

5:30 pm - 5:30 pm Chairs Closing Remarks & End of Conference