As AI adoption outpaces formal regulation in the U.S., large enterprises are being forced to lead on responsible AI without a prescriptive rulebook. This opening panel sets the tone for 2026, exploring how risk-based governance, self-regulatory commitments, and evidence-driven decision-making can unlock innovation rather than constrain it.
• Balancing innovation and accountability by applying risk-based governance models that scale across diverse AI use cases.
• Embedding practical standards and producing measurable evidence of responsible AI in action.
• Accelerating enterprise-wide AI enablement in a largely unregulated American landscape.
AI agents are moving from assistance to action by planning, executing, and triggering outcomes across business systems. This plenary examines the challenges organizations face as agentic AI enters core operating environments, including unclear accountability and expanding risk surfaces. Speakers will share practical approaches to governing AI agents responsibly while maintaining efficiency, control, and trust.
• Defining accountability for autonomous agent actions.
• Managing risk as agents operate across systems.
• Maintaining oversight without slowing execution.
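One of this session's themes, maintaining oversight without slowing execution, can be made concrete with a minimal sketch: low-risk agent actions proceed immediately and are logged, while high-risk actions pause for human approval. The risk labels, queue, and dispatch function below are hypothetical illustrations, not any speaker's actual system.

```python
# Illustrative only: risk-gated dispatch for agent-proposed actions.
import queue

audit_log: list[dict] = []                      # every action stays accountable
approval_queue: "queue.Queue[dict]" = queue.Queue()

def dispatch(action: dict) -> str:
    """Route an agent-proposed action based on its assessed risk."""
    audit_log.append(action)                    # full audit trail, low or high risk
    if action["risk"] == "high":                # e.g. moves money, deletes data
        approval_queue.put(action)              # human sign-off before execution
        return "pending human approval"
    return "executed"                           # low-risk path keeps velocity
```

The design choice is that oversight cost scales with risk: most actions never touch the approval queue, so governance does not become a bottleneck.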
AI adoption is outpacing governance, particularly in the U.S., where AI regulation is inconsistent across states and remains unsettled at the federal level. In this session, Matt Bedsole, Director of AI Governance at Lowe’s, shares how Lowe’s is building a scalable Responsible AI governance program anchored in practical software lifecycle controls, security and privacy reviews, and end-to-end traceability. Matt explains how Lowe’s is enabling rapid AI adoption while maintaining accountability and trust, with a clear focus on empowering the workforce and driving business value, not reducing headcount.
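By way of illustration, a minimal sketch of what lifecycle controls with end-to-end traceability could look like in code. The record fields and gate logic are assumptions for this sketch, not Lowe's actual controls.

```python
# A hypothetical traceability record for an AI use case, assuming a
# program with lifecycle controls plus security and privacy reviews.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUseCaseRecord:
    use_case_id: str
    owner: str                       # accountable business owner
    lifecycle_stage: str             # e.g. "design", "build", "deploy", "monitor"
    security_review_passed: bool = False
    privacy_review_passed: bool = False
    audit_trail: list = field(default_factory=list)

    def log(self, event: str) -> None:
        """Append a timestamped event so every decision stays traceable."""
        self.audit_trail.append((datetime.now(timezone.utc).isoformat(), event))

    def may_deploy(self) -> bool:
        """Gate deployment on both reviews, mirroring lifecycle controls."""
        return self.security_review_passed and self.privacy_review_passed
```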
Invisible AI introduces a governance challenge: systems that continuously sense, infer, and act without explicit user interaction stretch traditional privacy and Responsible AI frameworks. When AI operates ambiently, responsibility shifts from user choice to organizational accountability, requiring stronger internal controls, oversight, and risk ownership. This panel focuses on how leaders can operationalize Responsible AI to manage privacy at scale when AI is embedded everywhere.
• Governance replaces user-led consent.
• Responsible AI requires inference accountability.
• Privacy must be enforced systemically.
Unlike traditional IT, AI systems don’t just follow rules; they learn, adapt, and change based on how we interact with them. Having launched Mars’ AI for product innovation, Brandel brings firsthand experience of overcoming governance setbacks and confronting unanswered questions that traditional IT models were never designed to handle. She shares hard-won lessons from monitoring, controlling, and governing learning AI at scale at Mars.
As enterprises scale AI across products and regions, governance has emerged as a fundamental security challenge rather than a compliance exercise. Christopher Campbell, Director of AI Governance and Global Product Security Lead at Lenovo, will outline how he built an AI governance framework that embeds security controls across the entire AI lifecycle. The framework centralizes ownership of all LLMs and applies structured technical analysis of model behavior, prompt engineering, accuracy, bias, toxicity, and content safety, translating these factors into measurable cyber and business risk. The presentation highlights how siloed governance and security efforts increase enterprise risk, underscoring the need to embed security controls at the earliest stages of AI development.
• AI governance is an enterprise security discipline.
• LLM behavior directly impacts cyber and business risk.
• Centralized control enables secure global AI deployment.
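To illustrate the idea of translating model behavior into measurable risk, the sketch below combines evaluation metrics into a single score and tier. The metric names, weights, and cutoffs are assumptions for illustration and do not reflect Lenovo's actual framework.

```python
# A minimal sketch: turning LLM evaluation results into a risk score.
RISK_WEIGHTS = {
    "inaccuracy": 0.3,      # 1 - measured accuracy
    "bias": 0.25,           # share of bias probes failed
    "toxicity": 0.25,       # share of outputs flagged toxic
    "unsafe_content": 0.2,  # share of content-safety violations
}

def model_risk_score(metrics: dict[str, float]) -> float:
    """Weighted sum of normalized (0-1) metric scores; higher is riskier."""
    return sum(RISK_WEIGHTS[k] * metrics.get(k, 0.0) for k in RISK_WEIGHTS)

def risk_tier(score: float) -> str:
    """Map a numeric score to a cyber/business risk tier (cutoffs illustrative)."""
    if score >= 0.5:
        return "high"
    return "medium" if score >= 0.2 else "low"
```

For example, a model with 10% inaccuracy, 20% bias failures, and no toxicity or safety findings would score 0.3 × 0.1 + 0.25 × 0.2 = 0.08, landing in the "low" tier under these assumed weights.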
As autonomous AI agents proliferate across teams and platforms, organizations face growing challenges in ensuring compliance and mitigating operational and regulatory risk. This session explores strategic and technical approaches for discovering active agents, maintaining accurate inventories, and enforcing governance and risk controls.
• Strategically discovering agents across distributed environments.
• Maintaining comprehensive inventories.
• Implementing governance and risk controls without disrupting workflows.
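A minimal sketch of the inventory side of this problem: reconciling agents discovered in the environment against an approved registry. The record fields and discovery source are assumptions, not a specific product's API.

```python
# Hypothetical agent registry reconciliation.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    agent_id: str
    owner: str            # team accountable for the agent
    platform: str         # e.g. orchestration framework or SaaS host
    approved: bool        # passed governance review

def reconcile(registry: dict[str, AgentRecord],
              discovered_ids: set[str]) -> dict[str, set[str]]:
    """Compare discovered agents with the approved inventory."""
    known = set(registry)
    return {
        "unregistered": discovered_ids - known,        # shadow agents to triage
        "missing": known - discovered_ids,             # registered but not seen
        "unapproved": {a for a in discovered_ids & known
                       if not registry[a].approved},   # running without sign-off
    }
```

Each bucket maps to a governance action: triage shadow agents, investigate missing ones, and halt or fast-track review for unapproved ones, without disrupting compliant workflows.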
As AI systems move from experimentation to embedded business tools, accountability often becomes unclear. This roundtable brings leaders together to discuss who owns risk across AI design, deployment, and day-to-day use. The conversation will explore how organizations are defining accountability, building AI literacy, and aligning incentives so responsibility is clear when things go wrong.
• Assigning accountability across AI development, deployment, and use.
• Clarifying ownership between technical, legal, and business teams.
• Building AI literacy to support responsible decision-making.
Autonomous AI in highly regulated industries such as healthcare and finance introduces complex risks that demand rigorous governance. This session explores how organizations implement responsible AI frameworks in critical sectors, balancing innovation, compliance, and accountability. It will highlight strategies for managing risk, ensuring regulatory alignment, and embedding oversight into AI systems that operate in high-stakes environments.
Last year’s discussion focused on building the governance guardrails for GenAI. This year, Citi’s focus has shifted to translating those guardrails into operational outcomes: moving from model governance to the practical realities of launch and adoption. That shift requires clear prioritization of use cases, consistent navigation of risk tiers, credible proof of business value, and effective performance monitoring after go-live. As customer-facing GenAI grows more autonomous, compliance and safety assurance must evolve alongside it. The strongest programs position post-deployment monitoring as an enabler of speed, safety, and scale, not a brake on innovation.
• Prioritize step-change, GenAI-driven business initiatives using practical, outcome-driven ROI metrics (for example, dollar impact from higher self-serve rates, average handle time (AHT) reduction, and quality uplift).
• Build launch teams tailored to each use case; these “pods” tend to be highly interdisciplinary, but their structure varies by initiative.
• Establish governance based on risk tiers and their corresponding testing requirements (see the sketch after this list).
• Maintain ongoing, near-real-time oversight post-launch, including:
• Measuring business value using the KPIs above.
• Monitoring post-deployment adherence to ensure approvals remain valid as systems scale and evolve.
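To make the tiering and monitoring concrete, here is a minimal sketch of gating launch on tier-specific testing and flagging post-launch KPI drift. The tier names, required tests, and KPI floors are illustrative assumptions, not Citi's actual program.

```python
# Hypothetical risk tiers mapped to pre-launch testing requirements.
TIER_TESTS = {
    "low":    {"accuracy_eval"},
    "medium": {"accuracy_eval", "red_team", "privacy_review"},
    "high":   {"accuracy_eval", "red_team", "privacy_review",
               "human_signoff"},
}

def release_ready(tier: str, completed_tests: set[str]) -> bool:
    """A use case may launch only once its tier's required tests pass."""
    return TIER_TESTS[tier] <= completed_tests

def adherence_alerts(approved_floors: dict[str, float],
                     live_kpis: dict[str, float]) -> list[str]:
    """Flag KPIs (e.g. self-serve rate) that fall below approved levels,
    signalling that the original approval may no longer hold."""
    return [kpi for kpi, floor in approved_floors.items()
            if live_kpis.get(kpi, 0.0) < floor]
```

For instance, a use case approved on a 0.60 self-serve rate floor that drifts to 0.52 in production would trigger an alert, prompting re-review before the approval is treated as still valid.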
This session examines how to evolve from ad hoc reviews into an embedded, repeatable framework that integrates existing risk assessments across the SDLC, Model Risk Management (MRM), and Third-Party Risk Management (TPRM), and beyond. It shows how acceptable-use gates and structured triage enable consistent risk decisions across diverse AI use cases, including internally developed and agentic systems that fall outside traditional MRM assumptions, and explores how acceptable risk is operationalized and recalibrated as conditions change.
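A hypothetical sketch of an acceptable-use gate feeding structured triage; the banned purposes and routing rules below are invented for illustration, not the session's actual framework.

```python
# Illustrative acceptable-use gate plus triage routing.
BANNED_USES = {"covert_surveillance", "automated_adverse_credit_decisions"}

def triage(use_case: dict) -> str:
    """Route an AI use case to a consistent, repeatable review path."""
    if use_case["purpose"] in BANNED_USES:
        return "rejected: fails acceptable-use gate"
    if use_case.get("third_party_model"):
        return "route to Third-Party Risk Management review"
    if use_case.get("agentic"):       # falls outside traditional MRM assumptions
        return "route to enhanced agentic-AI assessment"
    return "route to standard Model Risk Management review"
```

The point of encoding the gate is consistency: two similar use cases submitted by different teams land on the same review path rather than on whichever reviewer happens to see them first.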
Generative and agentic AI systems create novel attack surfaces and operational risks for enterprises. This panel explores how technical leaders are designing secure architectures, applying threat modelling, and layering defence-in-depth to protect production AI. It focuses on embedding traceability and privacy, and on validating outcomes, while keeping systems reliable at scale.
• Designing secure architectures for GenAI and autonomous agents.
• Applying threat modelling to emerging AI attack surfaces.
• Implementing defence-in-depth for privacy, traceability, and control.
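As a simplified illustration of defence-in-depth for a GenAI endpoint, the sketch below layers independent input, privacy, and output checks around a model call. The placeholder checks stand in for production-grade classifiers and are assumptions, not any panelist's actual controls.

```python
# Illustrative layered guardrails around a model call.
def contains_injection(text: str) -> bool:
    """Naive prompt-injection screen (real systems use trained classifiers)."""
    return any(p in text.lower() for p in ("ignore previous instructions",
                                           "system prompt"))

def redact_pii(text: str) -> str:
    """Placeholder privacy layer; production would use a PII detector."""
    return text.replace("@", "[at]")  # crude illustration only

def answer(prompt: str, model_call) -> str:
    if contains_injection(prompt):              # layer 1: input threat screen
        return "Request blocked by input policy."
    raw = model_call(redact_pii(prompt))        # layer 2: privacy scrubbing
    return redact_pii(raw)                      # layer 3: output filtering
```

The design principle is that each layer fails independently: a prompt that slips past the injection screen still passes through privacy scrubbing and output filtering before reaching a user.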
Responsible AI should accelerate the business, not slow it down. In this case study, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, shares how two decades of experience in data and analytics have helped shape the shift from compliance-driven governance to AI enablement at scale. He explains how organizational goals serve as the North Star for prioritizing AI use cases, keeping teams focused on delivering real business value. This session provides practical insight into governing AI upfront while balancing innovation, utility, and enterprise priorities.
As AI becomes embedded in core business processes, literacy can no longer sit solely within technical teams. This panel explores how organizations are building practical AI understanding across business functions, recognizing how non-technical teams shape AI outcomes. The discussion will focus on defining what “good enough” AI literacy looks like and how shared understanding strengthens governance, accountability, and decision-making.