Last year’s discussion focused on building the governance guardrails for GenAI. This year, Citi’s focus has shifted to translating those guardrails into operational outcomes - moving from model governance to the practical realities of launch and adoption. That shift requires clear prioritization of use cases, consistent navigation of risk tiers, credible proof of business value, and effective performance monitoring after go-live. As customer-facing GenAI grows more autonomous, compliance and safety assurance must evolve alongside it. The strongest programs position post-deployment monitoring as an enabler of speed, safety, and scale - not a brake on innovation.
• Prioritize step-change, GenAI-driven business initiatives using practical, outcome-driven ROI metrics ($ impact driven by self-serve rate increases, AHT (average handle time) reduction, and quality uplift, for example; an illustrative sketch follows this list).
• Build launch teams tailored to each use case. These teams tend to be highly interdisciplinary, but the structure of the “pod” varies by initiative.
• Establish governance based on risk tiers, with testing requirements matched to each tier.
• Monitor continuously after launch, in “near real time”, by:
  • Measuring business value using the KPIs above.
  • Tracking post-deployment adherence so that approvals remain valid as systems scale and evolve.
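To ground the ROI framing above, the sketch below shows one way the $ impact of a self-serve rate lift and an AHT reduction might be estimated. It is a minimal illustration only: the function name, inputs, and all figures are hypothetical assumptions, not Citi's actual metrics or methodology.

    # Illustrative sketch: annualized $ impact of a GenAI contact-center rollout.
    # All inputs, names, and figures are hypothetical assumptions.

    def annual_impact(
        annual_contacts: int,            # total contacts per year
        self_serve_lift: float,          # e.g. 0.05 = 5-point self-serve rate increase
        cost_per_agent_contact: float,   # fully loaded cost of an agent-handled contact ($)
        aht_reduction_sec: float,        # average handle time saved per agent contact (seconds)
        agent_cost_per_sec: float,       # loaded agent cost per second ($)
    ) -> float:
        # Contacts deflected from agents to self-serve channels.
        deflected = annual_contacts * self_serve_lift
        deflection_savings = deflected * cost_per_agent_contact

        # AHT savings on the contacts agents still handle.
        remaining = annual_contacts - deflected
        aht_savings = remaining * aht_reduction_sec * agent_cost_per_sec

        return deflection_savings + aht_savings

    if __name__ == "__main__":
        # Hypothetical volumes: 10M contacts, 5-point self-serve lift,
        # $6 per agent contact, 30s AHT reduction, $0.02/s agent cost.
        print(f"${annual_impact(10_000_000, 0.05, 6.0, 30.0, 0.02):,.0f}")

In practice, quality uplift is typically monetized separately (for example, through reduced repeat contacts), and the same KPIs feed the post-launch monitoring described above.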
Check out the incredible speaker line-up to see who will be joining Sami.