Responsible AI Summit North America: Day 1 - Tuesday, June 23, 2026


Morning Plenary

8:00 am - 8:45 am Registration & Breakfast

8:45 am - 9:00 am Chair's Opening Remarks

9:00 am - 9:30 am Opening Panel Discussion – Control vs. Innovation: Scaling Responsible AI in a Fast-Moving Reality

John Downey - Chief Information Security Officer, GoFundMe
Liza Levitt - Vice President, Deputy General Counsel - Platforms, Responsible AI, Emerging Tech, Intuit

As AI adoption outpaces formal regulation in the U.S., large enterprises are being forced to lead on responsible AI without a prescriptive rulebook. This opening panel sets the tone for 2026, exploring how risk-based governance, self-regulatory commitments and evidence-driven decision-making can unlock innovation, rather than constrain it.

• Balancing innovation and accountability by applying risk-based governance models that scale across diverse AI use cases.

• Embedding practical standards and measurable evidence of responsible AI in action.

• Accelerating enterprise-wide AI enablement in a largely unregulated American landscape.



John Downey

Chief Information Security Officer
GoFundMe


Liza Levitt

Vice President, Deputy General Counsel - Platforms, Responsible AI, Emerging Tech
Intuit

9:30 am - 10:00 am Morning Keynote – Session details to be announced


10:00 am - 10:30 am Morning Presentation – When AI Acts: Defining Responsibility for AI Agents

Oliver Patel - Head of Enterprise AI Governance, AstraZeneca

AI agents are moving from assistance to action by planning, executing, and triggering outcomes across business systems. This plenary examines the challenges organizations face as agentic AI enters core operating environments, including unclear accountability and expanding risk surfaces. Oliver will share practical approaches to governing AI agents responsibly while maintaining efficiency, control, and trust.

• Defining accountability for autonomous agent actions.

• Managing risk as agents operate across systems.

• Maintaining oversight without slowing execution.



Oliver Patel

Head of Enterprise AI Governance
AstraZeneca

10:30 am - 11:00 am Morning Refreshment Break

Morning Tracked Sessions

Governance Track

11:00 am - 11:30 am Presentation – Session Details To Be Announced


Governance Track

11:30 am - 12:00 pm Presentation – Responsible AI at Lowe’s in a Fragmented Regulatory Landscape
Matt Bedsole - Director of AI Governance, Lowe's

AI adoption is outpacing governance, particularly in the U.S., where AI regulation is inconsistent across states and remains unsettled at the federal level. In this session, Matt Bedsole, Director of AI Governance at Lowe’s, shares how Lowe’s is building a scalable Responsible AI governance program anchored in practical software lifecycle controls, security and privacy reviews, and end-to-end traceability. Matt explains how Lowe’s is enabling rapid AI adoption while maintaining accountability and trust, with a clear focus on empowering the workforce and driving business value, not reducing headcount.

• Reframing AI use-case intake through risk-based escalation with clear ownership and decision thresholds.
• Automating governance workflows with human-in-the-loop review and audit-ready logging.
• Enabling employees with approved tools, built-in traceability, and enforceable consequences for misuse.


Matt Bedsole

Director of AI Governance
Lowe's

Governance Track

12:00 pm - 12:30 pm Presentation – Open-Source AI at Scale: Legal Risk, Model Governance, and the Evolving Shape of Responsible AI Teams
Franklin Graves - Senior Counsel - Product & Data (AI), LinkedIn

Open-source models have become foundational to enterprise AI, shifting legal risk from a one-time review to an ongoing operational challenge. At LinkedIn, Responsible AI now spans model documentation, traceability, and compliance with expanding requirements such as the EU AI Act and emerging U.S. state laws. In this session, Franklin Graves, Senior Counsel, Product and Data (AI) at LinkedIn, shares what has changed since last year, including how legal teams are operationalizing large-scale model reviews, how Responsible AI roles are evolving across the organization, and how agentic AI is beginning to help teams keep pace with growing complexity.
• Mapping open-source legal risk, including licensing, provenance, and downstream liability.
• Scaling model review through standardised documentation and audit-ready workflows.
• Using internal AI assistants to accelerate policy interpretation and Responsible AI program execution.


Franklin Graves

Senior Counsel - Product & Data (AI)
LinkedIn

Governance Track

12:30 pm - 1:00 pm Panel Discussion – Invisible AI, Accountable Systems: Governing Privacy at Scale
Aveen Sufi - Director Privacy Operations | Legal, Scan
Shana Morgan - Global Head of AI / Privacy, L3Harris Technologies

Invisible AI introduces a governance challenge: systems that continuously sense, infer, and act without explicit user interaction stretch traditional privacy and Responsible AI frameworks. When AI operates ambiently, responsibility shifts from user choice to organizational accountability, requiring stronger internal controls, oversight, and risk ownership. This panel focuses on how leaders can operationalize Responsible AI to manage privacy at scale when AI is embedded everywhere.

• Governance replaces user-led consent.

• Responsible AI requires inference accountability.

• Privacy must be enforced systemically.



Aveen Sufi

Director Privacy Operations | Legal
Scan


Shana Morgan

Global Head of AI / Privacy
L3Harris Technologies

Technical Track

11:00 am - 11:30 am Presentation – Session Details To Be Announced

Technical Track

11:30 am - 12:00 pm Presentation – Navigating the Shift from IT Governance to AI Governance at Mars
Brandel Kremer - Director of Data Governance, Mars

Unlike traditional IT, AI systems don’t just follow rules. They learn, adapt, and change based on how we interact with them. In launching AI for product innovation at Mars, Brandel has gained firsthand experience overcoming governance setbacks and unanswered questions that traditional IT models were never designed to handle. She shares hard-won lessons from monitoring, controlling, and governing learning AI at scale.

• Identifying what breaks when AI meets IT governance.
• Testing systems without fixed outcomes.
• Implementing guardrails to limit drift and hallucinations.


Brandel Kremer

Director of Data Governance
Mars

Technical Track

12:00 pm - 12:30 pm Presentation – AI Governance Is Security: How Lenovo Built a Framework for Enterprise AI Risk
Christopher Campbell - Director of AI Governance and Global Product and Services Security Leader, Lenovo

As enterprises scale AI across products and regions, governance has emerged as a fundamental security challenge rather than a compliance exercise. Christopher Campbell, Director of AI Governance and Global Product and Services Security Leader at Lenovo, will outline how he built an AI governance framework that embeds security controls across the entire AI lifecycle. The framework centralizes ownership of all LLMs and applies structured technical analysis of model behavior, prompt engineering, accuracy, bias, toxicity, and content safety, translating these factors into measurable cyber and business risk. The presentation highlights how siloed governance and security efforts increase enterprise risk, underscoring the need to embed security controls at the earliest stages of AI development.

• AI governance is an enterprise security discipline.

• LLM behavior directly impacts cyber and business risk.

• Centralized control enables secure global AI deployment.



Christopher Campbell

Director of AI Governance and Global Product and Services Security Leader
Lenovo

Technical Track

12:30 pm - 1:00 pm Panel Discussion

As autonomous AI agents proliferate across teams and platforms, organizations face growing challenges in ensuring compliance and mitigating operational and regulatory risk. This session explores strategic and technical approaches for discovering active agents, maintaining accurate inventories, and enforcing governance and risk controls.

• Strategically discovering agents across distributed environments.

• Maintaining comprehensive inventories.

• Implementing governance and risk controls without disrupting workflows.



Parisa Lak

Director AI Model Risk Management
Manulife


Arthur O'Connor

Academic Director
CUNY School of Professional Studies


Veer Yedlapalli

Director of Product Security, Security Engineering and AI Security
Grainger

Lunch

1:00 pm - 2:05 pm Lunch

Afternoon Tracked Sessions

Governance Track

2:05 pm - 2:35 pm Panel Discussion – Who Owns the Risk? Accountability in Responsible AI
Jeanne Michele Mariani - Counsel - AI and Data Governance, General Motors
Katina Banks - Knowledge Attorney, Gibson, Dunn & Crutcher, LLP

As AI systems move from experimentation to embedded business tools, accountability often becomes unclear. This panel brings leaders together to discuss who owns risk across AI design, deployment, and day-to-day use. The conversation will explore how organizations are defining accountability, building AI literacy, and aligning incentives so responsibility is clear when things go wrong.

• Assigning accountability across AI development, deployment, and use.

• Clarifying ownership between technical, legal, and business teams.

• Building AI literacy to support responsible decision-making.



Jeanne Michele Mariani

Counsel - AI and Data Governance
General Motors


Katina Banks

Knowledge Attorney
Gibson, Dunn & Crutcher, LLP

Governance Track

2:35 pm - 3:05 pm Presentation – Session Details to be Announced
Chris Meehleib - Associate Director of Responsible AI, UnitedHealth Group

Chris Meehleib

Associate Director of Responsible AI
UnitedHealth Group

Governance Track

3:05 pm - 3:35 pm Panel Discussion – AI on the Frontlines: Governance in Highly Regulated Sectors
Aneta Osmola - Vice President Data and AI Risk, Scotiabank
Rajiv Avacharmal - Director of Responsible AI, Prudential Financial

Autonomous AI in highly regulated industries such as healthcare and finance introduces complex risks that demand rigorous governance. This session explores how organizations implement responsible AI frameworks in critical sectors, balancing innovation, compliance, and accountability. It will highlight strategies for managing risk, ensuring regulatory alignment, and embedding oversight into AI systems that operate in high-stakes environments.

• Designing governance frameworks for highly regulated industries.
• Balancing autonomy, innovation, and regulatory compliance.
• Embedding oversight into critical AI systems and workflows.


Aneta Osmola

Vice President Data and AI Risk
Scotiabank


Rajiv Avacharmal

Director of Responsible AI
Prudential Financial

Technical Track

2:05 pm - 2:35 pm Presentation – From Approval to Adoption: Launching Business-Proven Responsible GenAI with Real-Time Monitoring
Sami Huovilainen - Managing Director - Head of Next Gen Analytics, Citi

Last year’s discussion focused on building the governance guardrails for GenAI. This year, Citi’s focus has shifted to translating those guardrails into operational outcomes - moving from model governance to the practical realities of launch and adoption. That shift requires clear prioritization of use cases, consistent navigation of risk tiers, credible proof of business value, and effective performance monitoring after go-live. As customer-facing GenAI grows more autonomous, compliance and safety assurance must evolve alongside it. The strongest programs position post-deployment monitoring as an enabler of speed, safety, and scale - not a brake on innovation.

• Prioritize step-change GenAI-driven business initiatives using practical, outcome-driven ROI metrics (for example, dollar impact from self-serve rate increases, AHT reduction, and quality uplift).

• Build launch teams tailored to each use case; these pods tend to be highly interdisciplinary, though their structure varies by initiative.

• Establish governance based on risk tiers and their corresponding testing requirements.

• Run post-launch monitoring on an ongoing, near-real-time basis, including:

• Measuring business value using the KPIs above.

• Monitoring post-deployment adherence to ensure approvals remain valid as systems scale and evolve.



Sami Huovilainen

Managing Director - Head of Next Gen Analytics
Citi

Technical Track

2:35 pm - 3:05 pm Presentation – From Ad Hoc to Embedded: A Journey to Operationalize Responsible AI
Cindy Tu - Director of IT & Data Audit, Capital One

This session explores how to evolve from ad hoc reviews into an embedded, repeatable framework that integrates existing risk assessments across the SDLC, Model Risk Management, Third-Party Risk Management, and beyond. It examines how acceptable-use gates and structured triage enable consistent risk decisions across diverse AI use cases, including internally developed and agentic systems that fall outside traditional MRM assumptions, and how acceptable risk is operationalized and recalibrated as conditions change.

• Integrating RAI into model risk, third-party risk, and SDLC controls.
• Monitoring drift with rollback and incident playbooks for reputational risk.
• Building triage and acceptable-use gates that scale across AI use cases.


Cindy Tu

Director of IT & Data Audit
Capital One

Technical Track

3:05 pm - 3:35 pm Panel Discussion

Generative and agentic AI systems create novel attack surfaces and operational risks for enterprises. This panel explores how technical leaders are designing secure architectures, applying threat modelling, and layering defence-in-depth to protect production AI. It focuses on embedding traceability and privacy, and validating outcomes, while keeping systems reliable at scale.

• Designing secure architectures for GenAI and autonomous agents.

• Applying threat modelling to emerging AI attack surfaces.

• Implementing defence-in-depth for privacy, traceability, and control.



Guman Chauhan

Information Security Leader | Technical Solutions & Security Lead
State of California


Zachary Hanif

Head of AI, ML, and Data, VP Traffic Intelligence
Twilio

Afternoon Plenary

3:35 pm - 4:05 pm Afternoon Coffee Break

4:05 pm - 4:35 pm Afternoon Keynote – From Governance to Enablement: Turning Responsible AI into a Business Accelerator

Amit Shivpuja - Director of Data Product and AI Enablement, Walmart

Responsible AI should accelerate the business, not slow it down. In this case study, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, shares how two decades of experience in data and analytics have helped shape the shift from compliance-driven governance to AI enablement at scale. He explains how organizational goals serve as the North Star for prioritizing AI use cases, keeping teams focused on delivering real business value. This session provides practical insight into governing AI upfront while balancing innovation, utility, and enterprise priorities.

• Reframing data governance into AI enablement with organizational goals as the guiding North Star.
• Embedding guardrails, accountability, and AI literacy at the source to support scalable AI adoption.
• Focusing AI initiatives on measurable business value rather than novelty or hype.


Amit Shivpuja

Director of Data Product and AI Enablement
Walmart

4:35 pm - 5:05 pm Afternoon Plenary Keynote – Session details to be announced

5:05 pm - 5:35 pm Panel Discussion

As AI becomes embedded in core business processes, literacy can no longer sit solely within technical teams. This panel explores how organizations are building practical AI understanding across business functions, recognizing how non-technical teams shape AI outcomes. The discussion will focus on defining what “good enough” AI literacy looks like and how shared understanding strengthens governance, accountability, and decision-making.

• Treating AI literacy as a core business capability.
• Strengthening governance through shared understanding.
• Recognising how non-technical teams shape AI outcomes.


Laurence Audrey Vincent

Director, Data & AI Governance & Adoption | Head of Privacy
ALDO Group


Kerry Barker

Senior Director of AI Governance
Sony PlayStation


Jasmine Luedke

Senior Clinical Projects GCA / AI Champion
Natera

5:35 pm - 5:45 pm Chair's Closing Remarks & End of Conference Day 1

5:45 pm Evening Networking Drinks Reception