Responsible AI Summit 2025 Workshop Day
As Responsible AI continues to evolve into a distinct and in-demand discipline, many professionals are asking the same questions: How do I enter the RAI space? How do I grow within it? This interactive workshop explores both the individual skills and the organisational capabilities needed to scale responsible AI.
Whether you’re shaping governance, influencing leadership, or driving implementation, understanding the emerging competency landscape is key, not only for your organisation but for your own RAI career trajectory. Attend this workshop to delve into these questions.
Walk away with clarity on the competencies shaping the future of RAI, and practical tools to map your team’s needs and your own career progression in this growing field.
The increasing adoption of AI technologies demands a fundamental shift in how organizations approach system design and implementation. Traditional software development methodologies often fail to address the unique challenges posed by AI systems, particularly regarding human-centered design considerations and ethical risk management. Organizations that prioritize robust user research and thoughtful design phases tend to achieve stronger ROI on their AI investments.
· Integrating comprehensive user research before technical development begins
· Implementing continuous risk assessment frameworks adapted for AI-specific challenges
· Establishing cross-functional teams bridging technical expertise with design thinking
Join this interactive deep-dive workshop to develop practical frameworks for adapting your organization's AI development lifecycle. You'll leave with actionable strategies to rebalance project resources toward human-centric design principles, risk identification methodologies, and oversight mechanisms that extend beyond conventional software development approaches.
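As a flavour of the risk identification methodologies discussed above, here is a minimal sketch of an AI-specific risk register as a small data structure. All names and fields (`RiskItem`, `Severity`, the category labels) are illustrative assumptions, not a prescribed framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskItem:
    """One entry in an AI-specific risk register (illustrative fields only)."""
    description: str
    category: str          # e.g. "fairness", "safety", "privacy"
    severity: Severity
    likelihood: Severity
    mitigation: str = ""

    def priority(self) -> int:
        # Simple priority score: severity x likelihood
        return self.severity.value * self.likelihood.value

@dataclass
class RiskRegister:
    items: list[RiskItem] = field(default_factory=list)

    def add(self, item: RiskItem) -> None:
        self.items.append(item)

    def top_risks(self, n: int = 3) -> list[RiskItem]:
        # Highest-priority risks first, for continuous review cadences
        return sorted(self.items, key=lambda r: r.priority(), reverse=True)[:n]

register = RiskRegister()
register.add(RiskItem("Model output may reflect historical bias",
                      "fairness", Severity.HIGH, Severity.MEDIUM))
register.add(RiskItem("Prompt injection via user-supplied documents",
                      "safety", Severity.HIGH, Severity.HIGH))
top = register.top_risks(1)
```

In practice a register like this would be reviewed by the cross-functional team each iteration, so that new AI-specific risks surface continuously rather than only at launch.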
In today's rapidly evolving AI landscape, thorough testing of AI systems has become critical yet significantly more complex than traditional software testing. Organizations struggle with selecting appropriate evaluation methods for different AI modalities, interpreting benchmark results accurately, and establishing reliable validation processes that address fairness, safety, and performance concerns.
· Implementing testing strategies tailored to specific AI types and use cases
· Evaluating benchmark reliability through structured criteria and statistical validation
· Designing comprehensive testing frameworks that anticipate regulatory requirements
This interactive workshop provides practical experience with diverse AI testing methodologies across different system types. You'll depart with a customizable testing framework, techniques to critically assess benchmark results, and strategies to build robust validation processes that can adapt to emerging AI capabilities and governance requirements.
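One concrete example of the statistical validation mentioned above: a percentile bootstrap confidence interval over pass/fail benchmark outcomes, which shows how much uncertainty a headline accuracy number hides. This is a generic sketch using only the standard library, not a method endorsed by any particular benchmark:

```python
import random

def bootstrap_ci(outcomes, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a pass/fail benchmark score."""
    rng = random.Random(seed)
    n = len(outcomes)
    scores = []
    for _ in range(n_resamples):
        # Resample the benchmark items with replacement and rescore
        sample = [outcomes[rng.randrange(n)] for _ in range(n)]
        scores.append(sum(sample) / n)
    scores.sort()
    lo = scores[int((alpha / 2) * n_resamples)]
    hi = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical run: 100 benchmark items, 83 passed
outcomes = [1] * 83 + [0] * 17
lo, hi = bootstrap_ci(outcomes)
# A wide interval signals the benchmark is too small to distinguish
# models whose reported scores fall inside it.
```

On 100 items the interval spans several percentage points, which is exactly why small benchmarks cannot reliably rank models whose scores differ by a point or two.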
As AI systems become increasingly autonomous, understanding agentic AI is critical for responsible deployment. This interactive workshop will introduce the fundamentals of AI agents, their capabilities, and governance challenges. Attendees will explore real-world use cases and best practices for oversight, risk mitigation, and ethical considerations.
Key Takeaways:
· Learn the fundamentals of AI agents and their evolving role in enterprise AI.
· Explore governance frameworks for managing autonomous AI systems.
· Identify technical and ethical challenges in deploying agentic AI.
Through group exercises and discussions, participants will gain practical insights into governing AI agents responsibly.
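One common oversight pattern for agentic AI is a human-in-the-loop approval gate: low-risk actions are auto-approved, higher-risk ones are escalated, and anything unrecognised is denied by default. The action names and `gate` function below are hypothetical, a minimal sketch of the idea rather than a production control:

```python
ALLOWED_ACTIONS = {"search", "summarize"}          # auto-approved, low risk
REVIEW_ACTIONS = {"send_email", "execute_trade"}   # require human sign-off

def gate(action: str, approver=None) -> bool:
    """Return True if the agent may proceed with `action`."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in REVIEW_ACTIONS:
        # Escalate to a human reviewer; deny when no approver is available
        return bool(approver and approver(action))
    return False  # default-deny anything unrecognised

assert gate("search")
assert not gate("delete_database")
assert gate("send_email", approver=lambda a: True)
```

The default-deny branch is the important design choice: as agents gain new capabilities, each one must be explicitly classified before it can run, rather than slipping through unreviewed.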