As AI continues to reshape industries, the EU AI Act’s emphasis on AI literacy has made it a critical priority for enterprises. Ensuring that employees are well-versed in AI technologies isn’t just about compliance; it’s essential for fostering innovation, mitigating risk, and building trust in AI systems. Organisations must invest in upskilling their workforce to ensure a smooth transition into an AI-powered future while adhering to new regulations.
· Equipping employees with AI literacy to meet EU AI Act mandates and compliance standards.
· Reskilling and upskilling talent for an AI-powered, regulated workforce.
· Building a culture of collaboration where human expertise and AI complement one another.
Slot Reserved for Sponsor Partner
Bring your real-world responsible AI challenges, and tap into the collective intelligence of your peers for innovative solutions and fresh perspectives.
As energy systems digitise and GenAI adoption accelerates, critical infrastructure operators face new regulatory scrutiny, cyber threats, and resilience risks. At Centrica, building a Responsible AI Framework has been key to scaling innovation while safeguarding operations, customers, and society. Attend this talk to understand how Ronnie and his team are:
· Embedding AI governance across GenAI, ML, and critical energy infrastructure systems
· Aligning risk tiering with regulatory, cyber, and environmental resilience expectations
· Translating responsible AI principles into action across complex, distributed operations
As one of the UK’s leading energy providers, OVO Energy is focused on enabling a greener future—and that includes how it uses AI. In this session, Myrna Macgregor, AI Risk Lead, explores the dual challenge of using data and AI to support decarbonisation, while also addressing the hidden environmental costs of AI itself. From energy optimisation to model accountability, discover how a responsible approach to AI can serve both innovation and sustainability goals.
As AI adoption accelerates across industries, ensuring responsible, scalable, and consistent deployment is more critical than ever, especially in highly regulated sectors like pharma. At Novo Nordisk, responsible AI is not a siloed initiative; it combines collective effort across the enterprise with area- and domain-specific requirements. From aligning with evolving legislation like the EU AI Act to making compliance easier for practitioners, this session explores how practical tooling, a trustworthy AI council, and other strong governance structures can turn frameworks into action.
As AI adoption grows, so does the need for machines—and humans—to understand data in context. In the Scottish Government, Masood and the Digital Directorate’s technical team are architecting the data foundations that make responsible, explainable AI possible. This session explores the hands-on work behind enabling AI systems to interact with public sector data through intelligent tagging, synonym resolution, and semantic modelling. Learn how knowledge graphs, metadata enrichment, and NLP interfaces are unlocking scalable, governed access to complex datasets—without compromising on accountability.
· Architecting semantic layers using knowledge graphs to power explainability and data lineage
· Automating tagging, classification, and synonym resolution across siloed, federated data sources (see the sketch after this list)
· Enabling NLP-driven “talk to data” interfaces to widen access without exposing risk
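To ground these takeaways, here is a minimal, hypothetical sketch, in Python with the open-source rdflib library, of how synonym resolution against a SKOS-style vocabulary can drive automated tagging. The namespace, vocabulary terms, and example text are invented for illustration and do not reflect the Scottish Government’s actual stack.

```python
# Hypothetical sketch: resolve free-text terms to canonical concepts in a
# SKOS-style vocabulary, then use those concepts to tag records.
# Vocabulary and example text are invented; requires `pip install rdflib`.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")

g = Graph()
# A tiny controlled vocabulary: one concept, one preferred label,
# and several synonyms recorded as SKOS altLabels.
g.add((EX.Dwelling, RDF.type, SKOS.Concept))
g.add((EX.Dwelling, SKOS.prefLabel, Literal("dwelling")))
for synonym in ("house", "home", "residence"):
    g.add((EX.Dwelling, SKOS.altLabel, Literal(synonym)))

def resolve_term(term):
    """Map a free-text term to a canonical concept URI, if one exists."""
    lit = Literal(term.lower())
    for label in (SKOS.prefLabel, SKOS.altLabel):
        for concept in g.subjects(label, lit):
            return concept
    return None

def tag_record(text):
    """Naive tagger: attach every concept whose label appears in the text."""
    return {c for token in text.lower().split() if (c := resolve_term(token))}

print(tag_record("Average house prices by council area"))
# -> {rdflib.term.URIRef('http://example.org/vocab/Dwelling')}
```

Because every tag resolves to a concept in the graph rather than a raw string, explainability and lineage come along naturally: each tag can be traced back to a governed vocabulary entry.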
Ensuring the safety of agentic AI systems requires thorough testing and verification methods. This session will dive into best practices for testing autonomous AI, focusing on verification protocols that help ensure systems behave predictably and ethically across the conditions they are likely to face.
· Exploring testing methodologies for ensuring the safety of autonomous AI systems.
· Implementing verification processes that support compliance with governance standards.
· Addressing the challenges of testing agentic AI and its ethical implications.
As generative AI rapidly integrates into banking workflows, from customer service chatbots to advisory copilots, the risks of hallucination, toxicity, privacy violations, and regulatory non-compliance are rising. Intesa Sanpaolo, one of Europe’s largest banking groups, is tackling this head-on. In this session, Alessandro, Head of Responsible AI, shares how his team is designing and implementing technical guardrails around generative AI models (illustrated with a simple sketch below), with a sharp focus on risk mapping, prompt injection protection, and fundamental rights impact assessments.
Alessandro Castelnovo, Head of Responsible AI, Intesa Sanpaolo
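To make the notion of a technical guardrail concrete, here is a minimal, hypothetical sketch of pre-generation input screening for prompt-injection patterns. The patterns, threshold, and prompt wrapper are illustrative assumptions, not Intesa Sanpaolo’s implementation.

```python
# Hypothetical sketch of a pre-generation guardrail: screen user input for
# common prompt-injection phrasings before it ever reaches a model.
# The patterns and threshold below are invented for illustration.
import re

INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now\b",
    r"system prompt",
    r"reveal .*(password|credential|instruction)",
]

def injection_risk(user_input):
    """Crude risk score: fraction of known patterns that match."""
    text = user_input.lower()
    hits = sum(bool(re.search(p, text)) for p in INJECTION_PATTERNS)
    return hits / len(INJECTION_PATTERNS)

def guarded_prompt(user_input, threshold=0.25):
    """Reject risky input; otherwise delimit it so the model treats it as data."""
    if injection_risk(user_input) >= threshold:
        raise ValueError("Input rejected: possible prompt injection.")
    return ("Answer using only the user text between the markers.\n"
            f"<user>\n{user_input}\n</user>")

print(guarded_prompt("What are my mortgage options?"))
```

Pattern matching of this kind is only a first layer; production guardrails typically add model-based classifiers, output filtering, and human escalation on top.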
In an industry where content is the product, AI governance and decision-making involve carefully balancing risk to assets against potential value. Andi, Head of Data and AI Governance at the Financial Times, shares how a 135-year-old premium news organisation effectively governs a wide programme of AI-enabled solutions and experiments, one that encourages exploration without undermining its intellectual property or journalistic integrity.
· Creating flexible, multi-tiered governance tools that serve different stakeholder needs—from quick checklists to comprehensive consultations
· Positioning AI governance as an innovation partner rather than a gatekeeper through approachable, frictionless processes
· Developing practical methods to evaluate AI use cases against ethical frameworks while maintaining competitive advantage
· Navigating the tension between exploring new AI-driven business models and safeguarding premium content value
As AI continues to evolve, so must the skills of the workforce. This session will examine best practices for developing training and upskilling programmes that enable employees to understand and responsibly engage with AI technologies. From foundational education to specialised workshops, we’ll cover how to structure learning paths that meet the needs of both technical and non-technical teams.
· Designing training programmes tailored to both technical and non-technical employees.
· Creating learning paths that support responsible AI integration and adoption.
· Fostering continuous AI education to stay ahead in an evolving landscape.
As AI transforms financial services, the Financial Conduct Authority is playing a dual role: setting expectations for responsible innovation across the sector while embedding responsible AI practices within its own organisation. In this session, Fatima, Principal Advisor for Responsible AI and Data, offers a rare window into both sides of that journey. From aligning internal frameworks with data privacy, cyber and legal requirements, to collaborating with DSIT and Ofcom on national policy, Fatima explores what responsible AI means in practice—for regulators and the regulated alike.
AI’s increasing role in workplace decisions, from hiring to performance management, raises important ethical concerns. This session will explore the ethical implications of AI-driven decisions, focusing on transparency, fairness, and accountability. We will address the role of governance in ensuring AI systems are used responsibly and in ways that uphold organisational values and employee rights.
· Examining the ethical impact of AI-assisted decisions on the workforce.
· Discussing strategies for ensuring fairness, transparency, and accountability in AI algorithms.
· Managing the ethical challenges of algorithmic management in the workplace.
As a multinational health technology company, Philips operates at the intersection of AI innovation, medical regulation, and enterprise governance. In this session, the Responsible AI team shares how they’re embedding scalable AI risk frameworks into enterprise risk structures—while also navigating the evolving regulatory landscape of the EU AI Act within an already heavily regulated medical domain. From bias mitigation to sustainability, this talk explores what responsible AI looks like when patient safety and compliance are non-negotiable.
· Aligning AI governance with enterprise risk management across a highly regulated global organisation
· Translating evolving AI-specific regulations into practical controls within clinical-grade systems
· Driving bias mitigation strategies tailored to the complexities of healthcare data and use cases
As organisations increasingly rely on third-party AI providers, managing AI governance across vendors becomes a critical challenge. Shared responsibility models, vendor due diligence, and AI supply chain risks require a holistic approach to ensure alignment with internal standards and regulatory requirements. This session will explore best practices for managing third-party risks and integrating AI governance into procurement processes.
· Delving into AI inventories: are they needed?
· Developing robust shared responsibility frameworks with AI technology providers.
· Implementing due diligence processes to evaluate AI vendors and tools.
· Assessing and mitigating risks within the AI supply chain.