AI regulation in the United States is emerging as a patchwork of state laws, federal guidance, and sector-specific rules rather than a single national framework. This panel brings together voices from government, policy, and industry to discuss what is taking shape, what remains uncertain, and which requirements are likely to endure. The conversation focuses on how organizations can operationalize responsible AI governance today and keep innovating without waiting for regulatory clarity that may never fully arrive.
• Understanding the practical impact of state, federal, and sector rules.
• Continuing to innovate amid regulatory uncertainty.
• Translating policy signals into enterprise-ready operating models.
Led by Anita Rao, Senior Policy Advisor at NIST, this interactive workshop draws on NIST’s work exploring challenges organizations face when measuring AI systems post-deployment.
In highly regulated industries, the hardest part of AI is not building models; it's proving you're in control. Scotiabank is exploring how to measure and report AI risk in a way that stands up to regulators, especially as agentic systems and third-party AI services accelerate. This session unpacks the practical challenges of balancing mandatory compliance, vendor accountability, and speed-to-market while building the organizational trust, resilience, and security that earn the label "responsible".
AI agents blur the boundary between technology and workforce: they operate within rules, yet exercise discretion in execution. This panel debates whether agentic systems should be governed like software, like employees, or through a new hybrid accountability model. Panellists examine how organizations define risk ownership and prioritize privacy when outcomes are unpredictable and failures carry operational, reputational, and regulatory consequences.
The rapid evolution of agentic AI, from single LLM-powered agents to coordinated crews and massive swarms, promises transformative autonomy in domains like supply chain, healthcare, and finance. Frameworks like CrewAI, LangGraph, and Google's ADK, combined with the Model Context Protocol (MCP) standard, enable dynamic agent-to-agent (A2A) collaboration and tool access. Yet this interconnected ecosystem introduces severe cascading risks: a single compromised agent can poison swarms via unsecured MCP calls, leading to data exfiltration, unauthorized actions, or ethical failures.
Scaling AI across a workforce introduces real challenges around accountability, trust, and responsible use. In this session, Mark Lesiw, AI Enablement Program Owner at Xcel Energy, shares how Xcel Energy increased active enterprise AI usage from 33% to over 90% across 27,000 employees in under 12 months. Mark introduces Return on Employee, a term he uses to define AI value across a workforce, focusing on productivity, confidence, and decision quality, alongside traditional Return on Investment. He explains how Responsible AI was embedded from the start through literacy, tool-level guardrails, and human-in-the-loop design so that AI use could scale safely and deliver lasting business value.
Generative AI is advancing from drafting alerts to actively supporting cyber defence operations. This session explores real-world applications in threat detection, cyber hygiene, attack path analysis, and orchestrated response. It highlights practical insights into how organizations are using AI to responsibly enhance detection and strengthen cyber resilience.
Deploying AI in humanitarian and crisis settings requires systems that adapt to local realities without compromising responsibility. In this presentation, André Heller shares how the International Rescue Committee embedded aprendIA, an AI-powered chatbot, to support educators in Nigeria while navigating stakeholder needs, data protection laws, and resource constraints. The session explores how global responsible AI principles can be translated into practical, locally grounded deployments that scale safely across regions.
• Adapting AI systems to local cultural, regulatory, and operational contexts.
• Scaling AI solutions efficiently in resource-constrained environments.
• Applying global responsible AI guardrails to real-world humanitarian deployments.
AI systems are now deeply embedded in core products, workflows, and decision-making, making governance a daily operating requirement, not a periodic review exercise. This panel examines how organizations are building cross-functional AI governance councils that are technically relevant, fast enough to keep pace with delivery teams, and clear on who has a seat and why. Speakers will break down how roles are defined in practice, how councils operate beyond high-risk use cases, and how governance models are built without forcing legacy policies onto AI systems that behave fundamentally differently.
As agents and GenAI systems gain autonomy, traditional testing no longer exposes the highest-risk failures. This presentation explores adversarial testing techniques used to surface jailbreaks, tool misuse, data leakage, and unsafe autonomous behaviour. It focuses on how technical teams operationalize red teaming as a continuous capability rather than a one-time exercise.
Unlike traditional software, agentic AI systems do not just execute instructions: they reason, act, and adapt through persistent interaction with tools and APIs. In this session, the speaker examines how this shift fundamentally breaks long-standing cybersecurity assumptions, particularly the reliance on prompt-based guardrails to defend LLMs against attacks. They explore why guardrails are structurally insufficient against adaptive adversaries, and what it realistically means to defend systems where autonomy itself expands the attack surface.
• How agent infrastructure expands the cybersecurity attack surface.
• Why finite guardrails fail against adaptive prompt injection.
• Reframing AI security in the age of agentic AI.
As U.S. state AI laws accelerate, organizations are increasingly caught between new AI-specific mandates and existing state privacy obligations. For the medical technology sector, the tension is acute: AI innovation relies on data use and third-party vendor relationships that often outpace existing consent and privacy models. In this session, Nereida Parks, AI Regional Privacy and HIPAA Officer at Olympus Corporation, unpacks the legal grey areas where state AI laws, privacy statutes, and vendor accountability collide. The discussion offers practical ways to navigate no-win compliance scenarios where risk must be actively managed, and decisions must be taken without harmonized legal guidance.
Moving from principle to practice, organizations face growing pressure to translate assurance, transparency, and security into day-to-day decision-making. This closing panel focuses on what comes next, highlighting practical actions that organizations can take to govern AI systems.