Rhea Saxena is the Technical & Product Lead at the Responsible AI Institute, where she translates ambitious governance ideals into practical, technical safeguards for AI. She spends her days at the crossroads of computer science and policy, building verification frameworks, designing retrieval-augmented governance tools, and probing how frontier AI systems can be deployed both safely and responsibly. Her portfolio spans work on AI safety agents, ethical decision-making in autonomous systems, and conformity assessments grounded in global standards like the NIST AI RMF and ISO/IEC 42001. Rhea holds a Master’s in Computer Science from Virginia Tech and has previously worked with Citi, Duke University, and EY, contributing to projects ranging from digital transformation to AI risk research. What drives her is the challenge of closing the gap between theory and practice: ensuring that conversations about trustworthy AI don’t stay confined to whitepapers, but take shape in the code, systems, and products organizations actually use. She’s especially passionate about embedding ethical choices into the design process so innovation can move fast without breaking trust.