The pace of AI adoption - and regulation - has accelerated dramatically in the past 12 months. As global frameworks mature and generative AI moves from pilot to production, how are organisations adapting their responsible AI strategies in real time? This opening panel brings together senior leaders to reflect on the most meaningful shifts of the last year - from governance structures and tooling to regulatory readiness and internal culture.
· Responding to evolving AI regulations with practical, cross-functional governance updates
· Adapting RAI frameworks for the age of GenAI, foundation models, and real-time use cases
· Moving from awareness to accountability: What’s working, what’s not, and what’s next
Join us for a grounded look at how Responsible AI has moved from theory to action in the past year - and where the pressure points remain.
Slot Reserved for Infosys, Talk Details to be Announced
As organizations navigate the complex landscape of AI implementation, many struggle with moving beyond theoretical frameworks to practical governance solutions - especially in highly regulated industries like energy. Ania Kaci, Global Leader for Responsible AI at Schneider Electric, brings frontline expertise from building and scaling a cross-functional Responsible AI program across a multinational energy management and industrial automation leader.
Drawing from Schneider Electric's journey developing high-risk AI systems for energy management, predictive maintenance, and industrial automation, Ania will provide actionable insights on:
· Translating EU AI Act requirements into practical governance frameworks that align with sectoral regulations and business objectives
· Building effective AI risk assessment methodologies specifically designed for energy and industrial automation applications
· Creating a thriving Responsible AI community that extends beyond legal and technical teams to drive organization-wide AI literacy
· Balancing innovation with compliance across global operations and regulatory landscapes
· Leveraging AI for sustainability goals while addressing AI's own environmental impact
When scaling AI governance across global organizations, leaders face a key decision: centralized or federated governance? This session will examine the advantages and challenges of both approaches, providing strategies to determine the optimal model for your organization’s size, complexity, and regulatory environment.
· Comparing centralized and federated governance models for global AI deployment.
· Building flexible frameworks that scale with organizational needs and regulations.
· Implementing governance structures that ensure consistency while allowing for local autonomy.
As AI becomes embedded across every business function, the question of where AI governance belongs—both structurally and culturally—grows increasingly critical. This session will explore whether AI governance should stand as its own dedicated function, how organizations can elevate it as a priority, and how companies are resourcing and evolving their approaches.
The discussion will tackle key questions, including:
How do LLMs behave across different languages and cultural contexts - and what does this mean for fairness in real-world deployments? Clara, Responsible AI lead at BBVA, is translating her applied research on multilingual LLM bias into concrete methodologies to evaluate bias in LLM-powered applications. Her work focuses on identifying performance disparities across demographics and vulnerable groups that could lead to discriminatory outcomes.
As AI adoption accelerates across every discipline, it’s critical to keep all colleagues informed and alert to the risks, limitations and biases inherent in AI systems. In this visual and insightful session, Richard Boorman will share how he has successfully engaged teams across a complex global enterprise - using compelling AI tests and memorable AI fails, alongside external research findings, to spark curiosity, drive understanding and promote responsible AI practices.
Mastercard is recognized for operating one of the most advanced AI Governance programs in the industry. Established in 2019, the program rigorously evaluates all internally developed and externally sourced AI systems to ensure they meet high standards of ethical, responsible and human-centric design and deployment.
Governance frameworks often focus on principles, but lack the mechanisms to prove systems meet performance, security, and ethical requirements over time. This session explores how evidence-driven assurance can bridge that gap, providing measurable confidence from ideation to decommissioning. Advai will share how decision gates, lifecycle-aligned testing, and continuous monitoring create traceable evidence for risk-based decisions and regulatory readiness - without slowing innovation.
Understand how to operationalize Responsible AI through governance that’s measurable, auditable, and built for evolving risk landscapes.
Monitoring AI systems post-deployment is key to ensuring their continued efficacy and safety. This session will focus on technical frameworks for tracking AI performance, detecting anomalies, and ensuring compliance with established governance practices. Learn about the tools that enable real-time monitoring and proactive management of deployed AI models.
· Learning about frameworks for continuous monitoring of AI system performance.
· Exploring anomaly detection tools to ensure AI systems perform as expected.
· Understanding compliance and governance monitoring for deployed AI.
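To make the monitoring ideas above concrete, the sketch below illustrates one common drift check: comparing a deployed model's live score distribution against its baseline using the population stability index (PSI). The function names and the 0.25 alert threshold are illustrative assumptions, not a reference to any specific speaker's tooling.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare a live score distribution against a baseline using PSI.
    PSI < 0.1 is commonly read as stable; > 0.25 as significant drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Expected (baseline) and actual (live) proportions per bin.
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    # Clip to avoid log(0) when a bin is empty on one side.
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

def check_drift(baseline, live, threshold=0.25):
    """Return (psi, alert) so a scheduled monitoring job can page on drift."""
    psi = population_stability_index(baseline, live)
    return psi, psi > threshold
```

In practice a job like this runs on a schedule against production predictions, with alerts routed into the same incident process used for other compliance monitoring.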
In this session, Laura will share how PepsiCo's hybrid governance model - balancing global oversight with local flexibility - enables responsible, enterprise-wide AI adoption without losing sight of cultural nuance, operational complexity, or ethical risk. This talk will delve into:
As AI adoption accelerates, governance in large enterprises is under strain, caught between regulatory pressure, organizational complexity, and the need for speed. This session distills lessons from some of the world’s largest organizations on how to navigate that tension and make governance a foundation for both trust and productivity.
In this panel discussion, we will share and dissect real-world case studies of AI governance in action.
Together, we will explore key themes such as decision-making structures, roles and responsibilities, and the practicalities of operationalizing governance models at scale.
· Sharing and critiquing case studies of AI governance frameworks.
· Discussing challenges and lessons learned from real-world implementations.
· Identifying best practices for scaling and ensuring accountability across AI initiatives.
As AI becomes embedded in core business and operational functions, safeguarding these systems requires deep collaboration between AI and cybersecurity teams. At Philip Morris International, Ray, Head of AI Security, is leading efforts to ensure security is not an afterthought but a foundational principle across AI development. This session will share how PMI is building secure-by-design AI practices in a complex, global environment, while aligning with compliance and risk expectations.
As generative AI rapidly integrates into banking workflows - from customer service chatbots to advisory copilots - the risks of hallucination, toxicity, privacy violations, and regulatory non-compliance are rising. Intesa Sanpaolo, one of Europe’s largest banking groups, is tackling this head-on. In this session, Alessandro, Head of Responsible AI, shares how his team is designing and implementing technical guardrails around generative AI models, with a sharp focus on risk mapping, prompt injection protection, and fundamental rights impact assessments.
Establishing guardrails for responsible AI deployment is essential for minimizing risk and ensuring ethical outcomes. This session will cover how to design and implement content filtering mechanisms and establish safeguard protocols that prevent harmful AI behaviour, especially in sensitive or regulated environments.
· Developing technical guardrails and content filtering for ethical AI deployment.
· Understanding the importance of regulatory compliance in AI safety design.
· Implementing mechanisms for proactive monitoring and control of AI outputs.
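As a minimal illustration of the guardrail pattern described above, the sketch below layers two checks on a model's output: redaction of likely PII, then blocking of responses matching disallowed-content patterns. The patterns, placeholder strings, and policy rule are hypothetical examples, not any vendor's or speaker's actual filter set.

```python
import re

# Hypothetical PII patterns: email addresses and card-like digit runs.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")
# Hypothetical disallowed-content patterns (e.g. prohibited financial claims).
BLOCKED = [re.compile(p, re.IGNORECASE) for p in (r"\bguaranteed returns\b",)]

def apply_guardrails(text: str) -> tuple[str, bool]:
    """Return (safe_text, blocked). Redaction runs first, blocking second."""
    redacted = CARD.sub("[REDACTED-CARD]", EMAIL.sub("[REDACTED-EMAIL]", text))
    blocked = any(p.search(redacted) for p in BLOCKED)
    if blocked:
        return "[RESPONSE WITHHELD: policy violation]", True
    return redacted, False
```

Production guardrails typically combine many such layers (classifiers, prompt-injection detectors, human review queues); regex filtering alone is only a first line of defence.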
Amid rapid advancements in AI technology, navigating global regulatory frameworks has become paramount for organisations aiming to balance innovation with compliance. This session explores key developments in AI regulation across major jurisdictions, highlighting the EU AI Act alongside the UK and US approaches, while addressing geopolitical influences and strategies for managing regulatory divergence and uncertainty.
· Understanding current global AI regulatory landscapes.
· Navigating regulatory complexities while fostering innovation.
· Developing strategies to ensure compliance and competitive advantage.
Ensuring the safety of agentic AI systems requires thorough testing and verification methods. This session will dive into the best practices for testing autonomous AI, focusing on verification protocols that ensure the systems behave predictably and ethically under all conditions.
· Exploring testing methodologies for ensuring the safety of autonomous AI systems.
· Implementing verification processes that demonstrate compliance with governance standards.
· Addressing the challenges of testing agentic AI and its ethical implications.
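One verification pattern the bullets above allude to can be sketched simply: a pre-execution gate that checks every action an agent proposes against an allowlist and basic argument constraints before the runtime executes it. The tool names and constraints below are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    """An action proposed by an agent: a tool name plus its arguments."""
    name: str
    args: dict = field(default_factory=dict)

# Hypothetical policy: which tools the agent may invoke autonomously,
# and a limit preventing bulk outbound actions without human review.
ALLOWED_TOOLS = {"search_docs", "send_summary"}
MAX_RECIPIENTS = 1

def verify(call: ToolCall) -> tuple[bool, str]:
    """Return (approved, reason); the runtime executes only approved calls."""
    if call.name not in ALLOWED_TOOLS:
        return False, f"tool '{call.name}' is not on the allowlist"
    if call.name == "send_summary":
        if len(call.args.get("to", [])) > MAX_RECIPIENTS:
            return False, "too many recipients for an autonomous send"
    return True, "ok"
```

Gates like this are only one layer of agent testing; they complement scenario-based evaluation and red-teaming rather than replacing them.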