The pace of AI adoption—and regulation—has accelerated dramatically in the past 12 months. As global frameworks mature and generative AI moves from pilot to production, how are organisations adapting their responsible AI strategies in real time? This opening panel brings together senior leaders to reflect on the most meaningful shifts in the last year—from governance structures and tooling to regulatory readiness and internal culture.
· Responding to evolving AI regulations with practical, cross-functional governance updates
· Adapting RAI frameworks for the age of GenAI, foundation models, and real-time use cases
· Moving from awareness to accountability: What’s working, what’s not, and what’s next
Join us for a grounded look at how Responsible AI has moved from theory to action in the past year—and where the pressure points remain.
As AI systems become more autonomous, organisations face new challenges in governance, security, and ethics. Traditional oversight models struggle to keep pace with AI capable of independent decision-making, raising critical questions about accountability, liability, and risk management. This session explores how enterprises can prepare for the rise of agentic AI while ensuring responsible deployment.
· Developing governance frameworks for increasingly autonomous AI systems.
· Implementing technical safeguards to manage risks and ensure accountability.
· Aligning ethical considerations with business objectives and regulatory expectations.
Slot Reserved for Lead Sponsor Partner
Amid rapid advancements in AI technology, navigating global regulatory frameworks has become paramount for organisations aiming to balance innovation with compliance. This session explores key developments in AI regulation across major jurisdictions, highlighting the EU AI Act alongside UK and US approaches, while addressing geopolitical influences and strategies for managing regulatory divergence and uncertainty.
· Understanding current global AI regulatory landscapes.
· Navigating regulatory complexities while fostering innovation.
· Developing strategies to ensure compliance and competitive advantage.
As organisations navigate the complex landscape of AI implementation, many struggle with moving beyond theoretical frameworks to practical governance solutions—especially in highly regulated industries like energy. Ania Kaci, Global Leader for Responsible AI at Schneider Electric, brings frontline expertise from building and scaling a cross-functional Responsible AI program across a multinational energy management and industrial automation leader.
Drawing from Schneider Electric's journey developing high-risk AI systems for energy management, predictive maintenance, and industrial automation, Ania will provide actionable insights on:
· Translating EU AI Act requirements into practical governance frameworks that align with sectoral regulations and business objectives
· Building effective AI risk assessment methodologies specifically designed for energy and industrial automation applications
· Creating a thriving Responsible AI community that extends beyond legal and technical teams to drive organisation-wide AI literacy
· Balancing innovation with compliance across global operations and regulatory landscapes
· Leveraging AI for sustainability goals while addressing AI's own environmental impact
When scaling AI governance across global organisations, leaders face a key decision: centralised or federated governance? This session will examine the advantages and challenges of both approaches, providing strategies to determine the optimal model for your organisation’s size, complexity, and regulatory environment.
· Comparing centralised and federated governance models for global AI deployment.
· Building flexible frameworks that scale with organizational needs and regulations.
· Implementing governance structures that ensure consistency while allowing for local autonomy.
In this interactive roundtable, you will share and dissect real-world case studies of AI governance in action. By coming prepared with examples from your own organisations, we will engage in a dynamic discussion about the challenges and successes of implementing AI governance frameworks. Together, we will explore key themes such as decision-making structures, roles and responsibilities, and the practicalities of operationalising governance models at scale.
· Sharing and critiquing case studies of AI governance frameworks.
· Discussing challenges and lessons learned from real-world implementations.
· Identifying best practices for scaling and ensuring accountability across AI initiatives.
How do LLMs behave across different languages and cultural contexts — and what does this mean for fairness in real-world deployments? Clara, Responsible AI lead at BBVA, is translating her applied research on multilingual LLM bias into concrete methodologies to evaluate bias in LLM-powered applications. Her work focuses on identifying performance disparities across demographics and vulnerable groups that could lead to discriminatory outcomes.
At Fresenius Medical Care, ensuring patient safety and trust isn’t just a goal—it’s a mandate. As AI becomes embedded in critical systems such as clinical decision tools and operational workflows, the challenge is to uphold that mandate and protect sensitive personal health data while fostering innovation. Bernie, who drives Product Data & Information Security Governance, will discuss how the company addresses these complex, large-scale challenges, where data privacy, information security, ethics, and regulatory requirements converge with responsible AI deployment.
Monitoring AI systems post-deployment is key to ensuring their continued efficacy and safety. This session will focus on technical frameworks for tracking AI performance, detecting anomalies, and ensuring compliance with established governance practices. Learn about the tools that enable real-time monitoring and proactive management of deployed AI models.
· Learning about frameworks for continuous monitoring of AI system performance.
· Exploring anomaly detection tools to ensure AI systems perform as expected.
· Understanding compliance and governance monitoring for deployed AI.
Nestlé is the world’s largest food and beverage company, with 300,000 employees, a digital ecosystem spanning 185 countries, and an AI portfolio powering everything from demand forecasting to food safety. But how does a non-tech-native organisation govern AI responsibly on a global scale? In this session, Laura will share how Nestlé’s hybrid governance model—balancing global oversight with local flexibility—enables responsible, enterprise-wide AI adoption without losing sight of cultural nuance, operational complexity, or ethical risk.
As AI becomes embedded in core business and operational functions, safeguarding these systems requires deep collaboration between AI and cybersecurity teams. At Philip Morris International, Ray, Head of AI Security, is leading efforts to ensure security is not an afterthought—but a foundational principle across AI development. This session will share how PMI is building secure-by-design AI practices in a complex, global environment, while aligning with compliance and risk expectations.
Slot Reserved for Sponsor Partner
Establishing guardrails for responsible AI deployment is essential for minimising risk and ensuring ethical outcomes. This session will cover how to design and implement content filtering mechanisms and establish safeguard protocols that prevent harmful AI behaviour, especially in sensitive or regulated environments.
· Developing technical guardrails and content filtering for ethical AI deployment.
· Understanding the importance of regulatory compliance in AI safety design.
· Implementing mechanisms for proactive monitoring and control of AI outputs.
As AI adoption accelerates, organisations face a critical challenge—how to drive innovation without compromising ethical standards, security, and compliance. The pressure to stay competitive often clashes with the need for responsible implementation, creating tension between speed and safeguards. This session explores strategies for achieving both.
· Aligning AI innovation with ethical, legal, and business risk considerations.
· Building governance frameworks that support agility without stifling progress.
· Fostering a culture where responsibility and innovation go hand in hand.
Slot Reserved for Sponsor Partner
Ever go to conferences hoping for practical insights that can actually help you do responsible AI better day to day, only to come away empty-handed after yet another high-level discussion about the EU AI Act? This session is for you. It will be interactive, it will be fun, and above all it will be a chance to get answers to your burning questions about how to survive and thrive as a responsible AI practitioner working in the fast-changing, complex world of AI.