Stephanie is helping lead OVO’s efforts to embed responsible AI into every stage of the development process, from implementing technical evaluations to creating people-focused policies. Stephanie previously served as a Data Scientist and Responsible AI Assessment Lead at Armilla AI, an AI evaluation startup. She has also held roles at the Alan Turing Institute, where she led research on genAI risks and AI regulation and standards, and at the Responsible AI Institute, a non-profit dedicated to building AI governance tools. She holds an MSc in Applied Mathematics from McGill University in Montreal, where her research focused on statistical methods for evaluating and tackling ML bias, as well as regulatory challenges for ensuring the safe use of AI. Her spare time is spent devouring innumerable fantasy novels, slowly improving her (still shaky) tennis skills, and dreaming about where to travel next.
Establishing guardrails for responsible AI deployment is essential for minimizing risk and ensuring ethical outcomes. This session will cover how to design and implement content filtering mechanisms and establish safeguard protocols that prevent harmful AI behaviour, especially in sensitive or regulated environments.
· Developing technical guardrails and content filtering for ethical AI deployment.
· Understanding the importance of regulatory compliance in AI safety design.
· Implementing mechanisms for proactive monitoring and control of AI outputs.