Responsible AI Summit North America: Day 2 - Wednesday, June 24, 2026


Morning Plenary

8:00 am - 8:30 am Registration & Breakfast

8:30 am - 8:40 am Chair's Opening Remarks

AI regulation in the United States is emerging as a patchwork of state laws, federal guidance, and sector-specific rules rather than a single national framework. This panel brings together voices from government, policy, and industry to discuss what is taking shape, what remains uncertain, and which requirements are likely to endure. The conversation focuses on how organizations can operationalize responsible AI governance today and continue to push innovation without waiting for regulatory clarity that may never fully arrive.

• Understanding the practical impact of state, federal, and sector rules.

• Continuing to innovate amid regulatory uncertainty.

• Translating policy signals into enterprise-ready operating models.


Shreya Amin

Former Chief AI Officer
New York State

Christabel Randolph

Associate Director
Center for AI and Digital Policy

Eileen Vidrine

Former Chief Data & Artificial Intelligence Officer
US Department of the Air Force, Chief Data & AI Office

9:10 am - 9:40 am Morning Keynote – Session Details To Be Announced


9:40 am - 10:10 am Morning Presentation – Session Details To Be Announced

Polina Zvyagina - Executive Lead Counsel, General Motors


Polina Zvyagina

Executive Lead Counsel
General Motors

10:10 am - 10:45 am Morning Roundtable Session – When AI Is Live: A Hands-On Workshop on Post-Deployment Realities

Anita Rao - Senior Policy Advisor, U.S. Center for AI Standards and Innovation, NIST

Led by Anita Rao, Senior Policy Advisor at NIST, this interactive workshop draws on NIST’s work exploring challenges organizations face when measuring AI systems post-deployment.

The session will focus on practical, post-deployment methodologies: what teams are measuring today, what's working, and where approaches break down as systems evolve. Participants will actively share experiences and help shape future NIST guidelines on AI monitoring and measurement.

Anita Rao

Senior Policy Advisor, U.S. Center for AI Standards and Innovation
NIST

10:45 am - 11:10 am Morning Coffee Break

Morning Tracks

Governance Track

11:10 am - 11:40 am Presentation – Session Details To Be Announced

Governance Track

11:40 am - 12:10 pm Presentation – Session Details To Be Announced
Renée Fishel - Director of Responsible AI, Novartis
Renée Fishel

Director of Responsible AI
Novartis

Governance Track

12:10 pm - 12:40 pm Presentation – Building Trust at Speed: Measuring AI Risk, Governing Third Parties, and Staying Resilient
Aneta Osmola - Vice President Data and AI Risk, Scotiabank

In highly regulated industries, the hardest part of AI is not building models; it's proving you're in control. Scotiabank is exploring how to measure and report AI risk in a way that stands up to regulators, especially as agentic systems and third-party AI services accelerate. This session unpacks the practical challenges of balancing mandatory compliance, vendor accountability, and speed-to-market, while building organizational trust, resilience, and security worthy of the label "responsible".

• Defining risk metrics beyond traditional validation: reporting, controls, and assurance evidence.
• Governing third-party AI and agentic services without becoming a blocker to innovation.
• Strengthening resilience through security, concentration-risk management, and continuous education.

Aneta Osmola

Vice President Data and AI Risk
Scotiabank

Governance Track

12:40 pm - 1:10 pm Panel Discussion – Governing AI Agents: Technology, Employee, or Both?
Hellena Crompton - Data Protection Officer UK&I, dentsu
Cindy Tu - Director of IT & Data Audit, Capital One

AI agents blur the boundary between technology and workforce: they operate within rules, yet exercise discretion in execution. This panel debates whether agentic systems should be governed like software, like employees, or through a new hybrid accountability model. Panelists examine how organizations define risk ownership and prioritize privacy when outcomes are unpredictable and failures carry operational, reputational, and regulatory consequences.

• Defining accountability, approvals, and escalation when agents take autonomous actions.
• Designing controls for unpredictability, discretion, and safe operating boundaries.
• Operationalizing lifecycle oversight: monitoring, rollback, and crisis-response readiness.

Hellena Crompton

Data Protection Officer UK&I
dentsu

Cindy Tu

Director of IT & Data Audit
Capital One

Technical Track

11:10 am - 11:40 am Presentation – Session Details To Be Announced

Technical Track

11:40 am - 12:10 pm Presentation – Actionable Security for Agentic AI: Securing Clients, Servers, MCPs, and Human-in-the-Loop to Prevent Cascading Risks
Veer Yedlapalli - Director of Product Security, Security Engineering and AI Security, Grainger

The rapid evolution of agentic AI, from single LLM-powered agents to coordinated crews and massive swarms, promises transformative autonomy in domains like supply chain, healthcare, and finance. Frameworks like CrewAI, LangGraph, and Google's ADK, combined with the Model Context Protocol (MCP) standard, enable dynamic agent-to-agent (A2A) collaboration and tool access. Yet this interconnected ecosystem introduces severe cascading risks: a single compromised agent can poison swarms via unsecured MCP calls, leading to data exfiltration, unauthorized actions, or ethical failures.

This technical talk delivers an actionable blueprint for end-to-end security across agentic AI layers, including client-side (agents/swarms), server-side (MCP servers/orchestrators), communications, and Human-in-the-Loop (HITL) integration. Drawing from production deployments and red-team exercises, Veer explores why layered defenses are essential for compliance (EU AI Act, DORA) and trustworthiness.

• One compromised agent can compromise the entire swarm.
• Secure MCPs are critical to preventing cascading failures.
• Layered defenses enable compliance, resilience, and trust.

Veer Yedlapalli

Director of Product Security, Security Engineering and AI Security
Grainger

Technical Track

12:10 pm - 12:40 pm Presentation – Enabling AI at Workforce Scale: The Story of Xcel Energy
Mark Lesiw - AI Enablement Program Owner, Xcel Energy

Scaling AI across a workforce introduces real challenges around accountability, trust, and responsible use. In this session, Mark Lesiw, AI Enablement Program Owner at Xcel Energy, shares how Xcel Energy increased active enterprise AI usage from 33% to over 90% across 27,000 employees in under 12 months. Mark introduces Return on Employee, a term he uses to define AI value across a workforce, focusing on productivity, confidence, and decision quality, alongside traditional Return on Investment. He explains how Responsible AI was embedded from the start through literacy, tool-level guardrails, and human-in-the-loop design so that AI use could scale safely and deliver lasting business value.

• Responsible AI starts with tool design.
• Return on Employee makes AI value clear and human.
• Accountability stays with people, not systems.

Mark Lesiw

AI Enablement Program Owner
Xcel Energy

Technical Track

12:40 pm - 1:10 pm Panel Discussion – Generative AI for Real-Time Cyber Defense
Henry Awere - Responsible AI Operations Lead and Technology Risk, Canada Life
Ayaz Minhas - AI Policy Manager, Meta

Generative AI is advancing from drafting alerts to actively supporting cyber defense operations. This session explores real-world applications across threat detection, cyber hygiene, attack-path analysis, and orchestrated response. It offers practical insights into how organizations are responsibly using AI to enhance detection and strengthen cyber resilience.

• Detecting and analyzing cyber threats in real time.
• Automating cyber hygiene and response safely.
• Balancing AI speed with human oversight.

Henry Awere

Responsible AI Operations Lead and Technology Risk
Canada Life

Ayaz Minhas

AI Policy Manager
Meta

Lunch

1:10 pm - 2:00 pm Lunch

Afternoon Tracks

Governance Track

2:00 pm - 2:30 pm Presentation – Session Details To Be Announced

Governance Track

2:30 pm - 3:00 pm Presentation – Adaptive AI in Complex Contexts: Balancing Local Realities with Global Guardrails
André Heller Pérache - Director of AI, International Rescue Committee

Deploying AI in humanitarian and crisis settings requires systems that adapt to local realities without compromising responsibility. In this presentation, André Heller shares how the International Rescue Committee embedded aprendIA, an AI-powered chatbot, to support educators in Nigeria while navigating stakeholder needs, data protection laws, and resource constraints. The session explores how global responsible AI principles can be translated into practical, locally grounded deployments that scale safely across regions.

• Adapting AI systems to local cultural, regulatory, and operational contexts.

• Scaling AI solutions efficiently in constrained-resource environments.

• Applying global responsible AI guardrails to real-world humanitarian deployments.


André Heller Pérache

Director of AI
International Rescue Committee

Governance Track

3:00 pm - 3:30 pm Panel Discussion – Inside the AI Governance Council: Who’s In, Who Decides, and What Actually Works
Teresa Reilly - VP, Data + AI Governance, Kenvue
Gary Brown - Chief Privacy Officer and AI Governance Corporate Lead, Westinghouse

AI systems are now deeply embedded in core products, workflows, and decision-making, making governance a daily operating requirement, not a periodic review exercise. This panel examines how organizations are building cross-functional AI governance councils that are technically relevant, fast enough to keep pace with delivery teams, and clear on who has a seat and why. Speakers will break down how roles are defined in practice, how councils operate beyond high-risk use cases, and how governance models are built without forcing legacy policies onto AI systems that behave fundamentally differently.

• Seat design determines speed, trust, and outcomes.
• Governance must integrate builders, not police them.
• Success metrics beat principles and static playbooks.

Teresa Reilly

VP, Data + AI Governance
Kenvue

Gary Brown

Chief Privacy Officer and AI Governance Corporate Lead
Westinghouse

Technical Track

2:00 pm - 2:30 pm Presentation – Red Teaming AI Agents: Identifying High-Impact Failure Modes
Shone Mousseiri - Head of AI Model Validation and Governance, Manulife

As agents and GenAI systems gain autonomy, traditional testing no longer exposes the highest-risk failures. This presentation explores adversarial testing techniques used to surface jailbreaks, tool misuse, data leakage, and unsafe autonomous behavior. It focuses on how technical teams operationalize red teaming as a continuous capability rather than a one-time exercise.

• Testing agents for jailbreaks and harmful autonomy.
• Simulating tool misuse and data exfiltration paths.
• Embedding red teaming into delivery pipelines.

Shone Mousseiri

Head of AI Model Validation and Governance
Manulife

Technical Track

2:30 pm - 3:00 pm Presentation – Session Details To Be Announced
Jennifer Hobbs - VP, Data and Analytics, Lead Data Scientist, Zurich North America


Jennifer Hobbs

VP, Data and Analytics, Lead Data Scientist
Zurich North America

Technical Track

3:00 pm - 3:30 pm Presentation – Securing Agentic AI Systems: When Guardrails Stop Working
Apostol Vassilev - Expert in Trustworthy and Responsible AI and Cybersecurity, NIST

Unlike traditional software, agentic AI systems do not just execute instructions: they reason, act, and adapt through persistent interaction with tools and APIs. In this session, the speaker examines how this shift fundamentally breaks long-standing cybersecurity assumptions, particularly the reliance on prompt-based guardrails to control LLMs against attacks. They explore why guardrails are structurally insufficient against adaptive adversaries, and what it realistically means to defend systems where autonomy itself expands the attack surface.

• How agent infrastructure expands the cybersecurity attack surface.

• Why finite guardrails fail against adaptive prompt injection.

• Reframing AI security in the age of agentic systems.


Apostol Vassilev

Expert in Trustworthy and Responsible AI and Cybersecurity
NIST

Afternoon Plenary

4:00 pm - 4:30 pm Afternoon Plenary Keynote – Session Details To Be Announced

4:30 pm - 5:00 pm Afternoon Plenary Keynote – Caught in the Crossfire: Navigating AI State Laws, Privacy Obligations, and the Consent Gap

Nereida Parks - AI, Regional Privacy and HIPAA Officer, OCA, Olympus Corporation

As U.S. state AI laws accelerate, organizations are increasingly caught between new AI-specific mandates and existing state privacy obligations. For the medical technology sector, the tension is acute: AI innovation relies on data use and third-party vendor relationships that often outpace existing consent and privacy models. In this session, Nereida Parks, AI, Regional Privacy and HIPAA Officer at Olympus Corporation, unpacks the legal grey areas where state AI laws, privacy statutes, and vendor accountability collide. The discussion offers practical ways to navigate no-win compliance scenarios where risk must be actively managed and decisions must be made without harmonized legal guidance.

• Navigating conflicting AI state laws and privacy requirements.
• Managing consent breakdowns across AI and third-party vendors.
• Operationalizing risk-based decisions without clear legal precedent.

Nereida Parks

AI, Regional Privacy and HIPAA Officer, OCA
Olympus Corporation

Moving from principle to practice, organizations face growing pressure to translate assurance, transparency, and security into day-to-day decision-making. This closing panel focuses on what comes next, highlighting practical actions that organizations can take to govern AI systems.

• Moving from frameworks to repeatable governance operations.
• Embedding assurance into procurement, deployment, and monitoring.
• Aligning multiple teams to sustain progress.

Oliver Patel

Head of Enterprise AI Governance
AstraZeneca

Alberto Rivera-Fournier

Chief Ethics Officer
Inter-American Development Bank (IDB)

Megan Bentley

Board Member
The Center for Independent Living, Berkeley, CA

5:30 pm Chair's Closing Remarks & End of Conference