Fatemeh Tahavori, Ph.D., is a Machine Learning Engineer and Computer Vision Scientist with deep expertise in developing and deploying AI systems across healthcare, automotive, and large-scale population data domains. She is currently a Member of the Technical Staff at Tomoro.ai, where she builds enterprise-grade reusable applied AI components (e.g., retrieval, orchestration, evals) and works alongside clients to accelerate their platforms from idea to prototype to scale. Fatemeh has held senior roles at IQVIA and Continental and contributed to foundational AI development as a founding engineer at a Y Combinator startup. Her work bridges research and production, with a focus on evaluation, interpretability, and trustworthy deployment of machine learning models.
In today's rapidly evolving AI landscape, thorough testing of AI systems has become critical, yet it is significantly more complex than traditional software testing. Organizations struggle to select appropriate evaluation methods for different AI modalities, interpret benchmark results accurately, and establish reliable validation processes that address fairness, safety, and performance concerns. This workshop addresses these challenges by:
· Implementing testing strategies tailored to specific AI types and use cases
· Evaluating benchmark reliability through structured criteria and statistical validation (see the sketch below)
· Designing comprehensive testing frameworks that anticipate regulatory requirements
This interactive workshop provides hands-on experience with AI testing methodologies across different system types. You'll leave with a customizable testing framework, techniques for critically assessing benchmark results, and strategies for building robust validation processes that can adapt to emerging AI capabilities and governance requirements.
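To make the statistical-validation bullet concrete, here is a minimal sketch of one widely used technique: bootstrapping a confidence interval around a benchmark accuracy score, so a headline number comes with an honest margin of error. The function name `bootstrap_accuracy_ci` and the sample counts are illustrative assumptions, not workshop materials.

```python
import random

def bootstrap_accuracy_ci(correct, n_resamples=10_000, alpha=0.05, seed=0):
    """Bootstrap a confidence interval for accuracy on a benchmark.

    `correct` is a list of 0/1 outcomes, one per benchmark item.
    Returns the point estimate and a (1 - alpha) percentile interval.
    """
    rng = random.Random(seed)
    n = len(correct)
    # Resample the per-item outcomes with replacement and recompute accuracy.
    stats = sorted(
        sum(rng.choices(correct, k=n)) / n for _ in range(n_resamples)
    )
    lower = stats[int((alpha / 2) * n_resamples)]
    upper = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return sum(correct) / n, (lower, upper)

# Hypothetical benchmark run: 870 correct answers out of 1,000 items.
outcomes = [1] * 870 + [0] * 130
point, (low, high) = bootstrap_accuracy_ci(outcomes)
print(f"accuracy = {point:.3f}, 95% CI = [{low:.3f}, {high:.3f}]")
```

If the interval is wide, a small gap between two models' headline scores may not be a reliable basis for choosing between them.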