Responsible AI Summit 2025 Workshop Day
As AI systems gain unprecedented autonomy, traditional governance frameworks are struggling to keep up. Agentic AI is making decisions faster than humans can intervene, and organisations that cling to conventional oversight risk operational, ethical, and reputational failures.
This interactive workshop equips leaders with the tools and frameworks needed to design governance systems that are both robust and adaptable in the age of autonomous AI.
Attendees will:
• Reimagine decision rights and escalation paths for AI systems operating independently.
• Explore human-in-the-loop frameworks that scale without slowing innovation.
• Assign accountable technical leadership to ensure transparency and auditability.
Walk away with a practical blueprint for next-generation AI governance, including risk matrices, escalation protocols, and real-world dashboards to ensure your organisation can deploy autonomous systems safely, ethically, and at scale.
Responsible AI is no longer a technical or compliance issue alone — it is a leadership and trust challenge. Yet many Responsible AI initiatives struggle to gain traction at the executive level due to misaligned language, competing priorities, and unclear business value.
This interactive workshop is designed to help Responsible AI leaders, practitioners, and cross-functional teams confidently engage senior leadership and the C-suite on Responsible AI. The session focuses on how to communicate RAI in a way that resonates with business leaders, aligns with organisational strategy, and drives informed decision-making without stalling innovation.
Through practical exercises and real-world scenarios, participants will explore how to frame Responsible AI in terms of risk, value, reputation, and growth, and how to move conversations beyond principles to action.
Attendees will:
• Understand what C-suite leaders care about when it comes to AI risk, opportunity, and trust, and how to tailor Responsible AI messaging accordingly.
• Translate Responsible AI concepts into clear, business-relevant narratives that resonate with executive priorities.
• Build confidence in influencing senior stakeholders, handling pushback, and addressing common executive objections.
• Develop communication strategies to secure leadership buy-in and align stakeholders across the organisation.
Walk away with practical frameworks, messaging tools, and leadership-ready talking points to successfully position Responsible AI as a strategic enabler, balancing speed, innovation, and trust at the highest levels of the organisation.
As the EU AI Act moves from legislation to implementation, many organisations are turning to AI standards for guidance — often without a clear sense of what these standards are, how they are developed, or how they are meant to be used. For Legal, Governance, Risk, and Responsible AI teams, standards can feel abstract or overly technical.
This workshop takes a practical, introductory approach to using AI standards as a support for EU AI Act implementation. It is designed for leaders who may be familiar with standards in principle, but are seeking clarity on which standards are relevant, how international ISO/IEC standards differ from emerging EU-specific standards, and how they can be referenced proportionately.
The session begins with a concise overview of the AI standards landscape, followed by a discussion of how standards are typically used in practice — not as compliance checklists, but as reference frameworks that help structure governance, documentation, and accountability.
Attendees will:
• Gain an overview of international and EU-specific AI standards relevant to the EU AI Act
• Understand how standards can support, but not replace, legal and governance judgement
• Explore proportionate ways to use standards based on AI risk, maturity, and context
• Identify practical entry points for referencing standards across legal, risk, engineering, and procurement teams
Leave with a clearer and more realistic understanding of how AI standards can be used as a pragmatic support tool for EU AI Act implementation, without adding unnecessary complexity.
The AI landscape is moving faster than any governance or compliance framework can adapt. The AI leaders of tomorrow will be those who anticipate disruption, understand emerging risks, and align their organisations to thrive in a future defined by autonomous systems, generative models, and global regulation.
This forward-looking workshop equips leaders, executives, and RAI practitioners with the tools to future-proof their AI strategies and make decisions that will stand the test of time.
Attendees will:
• Identify emerging AI risks, from autonomous decision-making to cross-border regulatory challenges.
• Explore horizon scanning and scenario planning techniques to anticipate disruption before it happens.
• Build a culture capable of agile, responsible innovation and ethical decision-making at scale.
Leave with strategic foresight tools, practical frameworks, and actionable scenarios that prepare your organisation for the AI challenges and opportunities of the next 2-4 years.