Romain Bourboulou, Ph.D., is an AI engineer at Tomoro AI, where he develops production-ready generative AI solutions. He began his career as a neuroscience researcher, investigating how the brain forms internal models of the world, before moving into applied AI. In recent years, he has combined this scientific foundation with expertise in generative AI, statistical modelling, and agent-based systems, delivering impactful projects across academia, government, the energy sector, and the video game industry. Today, Romain is dedicated to developing generative AI solutions that are not only powerful but also safe, interpretable, and human-centered.
In today's rapidly evolving AI landscape, thorough testing of AI systems has become critical yet significantly more complex than traditional software testing. Organizations struggle with selecting appropriate evaluation methods for different AI modalities, interpreting benchmark results accurately, and establishing reliable validation processes that address fairness, safety, and performance concerns.
· Implementing testing strategies tailored to specific AI types and use cases
· Evaluating benchmark reliability through structured criteria and statistical validation
· Designing comprehensive testing frameworks that anticipate regulatory requirements
This interactive workshop provides practical experience with diverse AI testing methodologies across different system types. You'll leave with a customizable testing framework, techniques to critically assess benchmark results, and strategies to build robust validation processes that can adapt to emerging AI capabilities and governance requirements.
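As one concrete taste of the statistical-validation theme: leaderboard gaps of a few points are often within sampling noise. The minimal Python sketch below (illustrative only, not workshop material; the function name, data, and accuracy figures are hypothetical) uses a paired bootstrap to put a confidence interval around an observed accuracy gap between two models scored on the same benchmark items.

```python
import random
import statistics

def bootstrap_diff_ci(scores_a, scores_b, n_boot=10_000, alpha=0.05, seed=0):
    """Paired bootstrap confidence interval for the accuracy gap between
    two models scored on the same benchmark items (1 = correct, 0 = wrong)."""
    assert len(scores_a) == len(scores_b), "models must be scored on the same items"
    rng = random.Random(seed)
    n = len(scores_a)
    diffs = []
    for _ in range(n_boot):
        # Resample benchmark items with replacement, keeping the pairing intact.
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(sum(scores_a[i] - scores_b[i] for i in idx) / n)
    diffs.sort()
    lower = diffs[int((alpha / 2) * n_boot)]
    upper = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.mean(scores_a) - statistics.mean(scores_b), lower, upper

# Hypothetical per-item results on a 200-question benchmark.
rng = random.Random(42)
model_a = [1 if rng.random() < 0.78 else 0 for _ in range(200)]
model_b = [1 if rng.random() < 0.74 else 0 for _ in range(200)]

gap, lo, hi = bootstrap_diff_ci(model_a, model_b)
print(f"Accuracy gap: {gap:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
# If the interval straddles zero, "Model A beats Model B" is not supported
# at this sample size -- a common pitfall when reading benchmark deltas.
```

The design choice to resample whole benchmark items (rather than each model's scores independently) preserves per-question difficulty, which is what makes the comparison between the two models fair.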