Frontier AI capabilities continue to leap forward, reshaping expectations and recalibrating what “state of the art” means almost monthly. Yet inside most organisations, adoption remains steady, deliberate, and constrained by governance, infrastructure, and readiness. This widening gap creates strategic confusion: are we preparing for a world that’s already here, or bracing for breakthroughs that won’t materialise evenly?
This session aims to disentangle technical progress from real-world absorption capacity, clarify where agentic systems may genuinely transform workflows, and offer a pragmatic lens for prioritising what matters amid hype cycles.
• Accelerating capability growth outpacing enterprise and policy absorption curves.
• Harnessing agentic systems where frontier advances drive real operational uplift.
• Prioritising durable shifts while filtering out performative or premature noise.
Talk Details to be Announced
AI has moved beyond proof-of-concept, but the gap between ambition and sustained enterprise value remains. Scaling successfully depends not just on the technology itself but on how organisations think about it - cutting through hype, building genuine literacy, and putting the right governance in place so that agentic AI can deliver real commercial outcomes. In this keynote, Dr Daniel Hulme, Chief AI Officer at WPP, will offer a practical framework for thinking about AI and agentic technologies - where the genuine opportunities lie, and where organisations risk being seduced by inflated expectations.
• A practical framework for AI and agentic adoption that cuts through the hype.
• Why governance and literacy are prerequisites for scaling - and how to frame that in terms the C-suite values.
• Augmenting, not replacing: using AI and agents to unlock creative and commercial potential.
• The broader implications of these technologies for business and society.
AI regulation is accelerating, yet many organisations still struggle to translate legal obligations into effective, day-to-day governance. Under the EU AI Act and emerging standards frameworks, organisational functions including Legal and Governance are collectively responsible for risk classification, oversight, documentation, and organisational controls, but often approach these duties from different angles. This session explores how Legal and Governance teams can work together in practice: aligning responsibilities, streamlining AI use case triage, and building a shared governance model that stands up to regulatory scrutiny.
• Aligning legal and governance roles through shared accountability models.
• Improving AI use case filtering to focus oversight where it matters most.
• Strengthening collaboration by clarifying what Governance needs from Legal to operationalise compliance.
AI-driven innovation is accelerating, and with it a surge of new tools, datasets, and experimental vendors vying for a place in the enterprise. Every promising partnership also carries potential privacy, security, and regulatory exposure. In this session, Lucia Batlova, EMEA Data Protection & Privacy Lead at Lenovo, reveals how teams in a global organization cut through the noise: rapidly assessing high-volume third-party requests, screening risky AI initiatives without slowing momentum, and enabling safe experimentation at scale. With real-world examples, she shows how legal can stay firmly positioned as an innovation accelerator while keeping risk in check, even amid a fragmented global AI landscape where traditional privacy and security frameworks fall short.
• Building scalable screening processes.
• Embedding privacy by design into teams.
• Automating and operationalising third-party oversight.
As the EU AI Act moves into its implementation phase, the focus is shifting from legislative ambition to operational delivery. Organisations must interpret risk classifications, conformity assessments and oversight duties, while aligning internal governance structures. This session offers a clear and practical update on timelines, enforcement trends and what regulators expect.
• Clarifying risk tiers and governance responsibilities.
• Aligning internal controls with supervisory scrutiny.
• Preparing for documentation, audit and enforcement readiness.
In this technical case study, Alessandro Castelnovo, Head of Responsible AI at Intesa Sanpaolo, details how the bank designed and operationalised Guardian Agents to govern emerging multi-agent AI ecosystems. He presents a formal taxonomy that combines three operational roles (Reviewers, Monitors, and Protectors) with five structured risk domains: Data Security & Protection; Performance & Reliability; Quality & Compliance; Explainability & Transparency; and Ethical Coordination & Decisioning. The framework clarifies responsibilities, embeds automated safeguards, enables continuous oversight, and supports accountable, resilient orchestration across complex agent networks.
• Formalising operational roles to define structured AI oversight mechanisms.
• Creating five risk domains that align governance with concrete control layers.
• Embedding safeguards to enable resilient, accountable multi-agent orchestration.
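The role-by-domain taxonomy described above can be pictured as a simple data model. The sketch below is purely illustrative: the role and domain names mirror the session abstract, while the class and variable names are hypothetical and do not reflect Intesa Sanpaolo's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    REVIEWER = "Reviewer"    # assesses agent outputs before release
    MONITOR = "Monitor"      # observes agent behaviour continuously
    PROTECTOR = "Protector"  # intervenes when safeguards are breached

class RiskDomain(Enum):
    DATA_SECURITY = "Data Security & Protection"
    PERFORMANCE = "Performance & Reliability"
    QUALITY = "Quality & Compliance"
    EXPLAINABILITY = "Explainability & Transparency"
    ETHICS = "Ethical Coordination & Decisioning"

@dataclass(frozen=True)
class GuardianAssignment:
    """One cell of the roles-by-domains oversight matrix (hypothetical)."""
    role: Role
    domain: RiskDomain

# Enumerate the full matrix: every operational role covers every risk domain.
matrix = [GuardianAssignment(r, d) for r in Role for d in RiskDomain]
assert len(matrix) == 15  # 3 roles x 5 domains
```

A matrix like this makes gaps visible: any role/domain pair without a concrete safeguard attached is an oversight blind spot.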
AI systems in high-stakes domains such as consumer finance must remain reliable as data distributions shift, regulations evolve, and new patterns emerge at scale. In this session, Dr Stuart Burrell, Director of AI Research & Innovation, and Dr Maeve Madigan, Research Scientist at Visa, share advances in building adaptive AI systems that maintain both performance and fairness in production environments. Drawing on research across fraud detection, credit decisioning, and vision–language models, the session explores three core challenges: how bias can emerge as a collective property in multi-agent systems, how standard test-time adaptation methods may amplify disparities under distribution shift, and how fairness evaluation must account for domain constraints such as extreme class imbalance and the dual goals of protection and service. They will also present methods that improve fairness without requiring model retraining, demonstrating how principled fairness research can deliver measurable reductions in bias while maintaining performance at global payments scale.
As enterprises delegate decisions to autonomous systems, they must confront a deeper question: what does it mean to transfer authority without severing responsibility? Agentic AI challenges traditional notions of control, oversight, and accountability, forcing organisations to redefine human agency in operational terms. This panel explores the philosophical and practical dimensions of delegation, and the skills required to remain meaningfully responsible for systems that act on our behalf.
• Examining delegation without severing human responsibility.
• Redefining agency in autonomous enterprise systems.
• Building skills for accountable human oversight.
Agentic AI represents the next stage of enterprise AI - moving beyond copilots to systems that can plan, reason, and act within workflows. For organizations like Shell, operating in complex, safety-critical, and highly regulated environments, deploying such systems requires a strong focus on responsible design, governance, and transparency. This session will explore how large enterprises can adopt agentic AI while maintaining trust, accountability, and human oversight.
• Defining Agentic AI: what differentiates agents from traditional chatbots and copilots in enterprise settings.
• Enterprise use cases: how agentic systems can support engineering knowledge, operations, and complex decision-making.
• Responsible autonomy: designing bounded agents with clear scopes, guardrails, and human-in-the-loop oversight.
• Governance and observability: ensuring traceability, auditability, and compliance for AI-driven actions.
• Scaling responsibly: lessons for deploying agentic AI safely across a global organization.
The EU AI Act mandates human oversight, yet in practice this often means junior staff becoming “AI checkers” who rubber-stamp automated outputs. In this candid session, Dr David Crelley, Head of Responsible AI & Data at Admiral Group, challenges the compliance-driven interpretation of human-in-the-loop and examines why oversight designed for efficiency frequently undermines effectiveness and accountability. He will share how his team is rethinking oversight as proactive engagement: using judge LLMs to do the heavy lifting while deliberately introducing friction, cultural ownership, and meaningful intervention points into AI-enabled processes. David offers practical insight into how to design oversight models that create genuine human engagement rather than passive validation.
• Moving from passive validation to active engagement.
• Using judge LLMs to handle routine checks.
• Designing friction to strengthen human judgement.
• Embedding cultural ownership into AI oversight.
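One way to picture the “judge LLM plus deliberate friction” pattern is a triage step that auto-approves only routine, high-confidence outputs and routes everything else to a mandatory human intervention point. This is a hypothetical sketch of the pattern, not Admiral's implementation; all names and the threshold are illustrative.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class JudgedOutput:
    text: str
    judge_score: float   # judge LLM's confidence the output is acceptable (0..1)
    high_stakes: bool    # e.g. a customer-impacting decision

def route_for_oversight(item: JudgedOutput,
                        human_review: Callable[[JudgedOutput], bool],
                        threshold: float = 0.9) -> bool:
    """Auto-approve only routine, high-confidence outputs; everything else
    hits deliberate friction: a mandatory human intervention point."""
    if item.high_stakes or item.judge_score < threshold:
        return human_review(item)   # active engagement, not rubber-stamping
    return True                     # judge LLM handled the basic check

# A high-stakes item always reaches a human, whatever the judge score.
approved = route_for_oversight(
    JudgedOutput("Claim rejected", judge_score=0.97, high_stakes=True),
    human_review=lambda item: False,  # reviewer overturns the output
)
```

The design choice is the point: friction is applied selectively, so human attention concentrates on the cases where judgement actually matters.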
As organisations scale AI, tensions often emerge between established data governance teams and newly formed AI governance functions. Overlapping mandates can create gaps or unclear accountability. This session takes a practical look at how leading enterprises are defining boundaries, integrating responsibilities, and building operating models that make data and AI governance work together in practice.
• Clarifying mandates between data and AI governance.
• Aligning ownership across the model lifecycle.
• Designing operating models that avoid duplication.
In highly regulated sectors such as life sciences, regulatory changes create a hidden risk beyond the usual data drift: compliance drift. Practical strategies are therefore needed to keep AI aligned with evolving regulations and operational realities. This session will present the AI governance setup in the Research and Development area of Novo Nordisk. It will cover ideas, challenges, and practical solutions for automating AI governance without correspondingly scaling human labour. The presentation will include how to move from regulation to automatable requirements and processes that enable the use of rule-, chatbot-, and agent-based automation. It will also address how to prevent compliance drift, how to make automation complement human labour, and how to create oversight of AI systems and visualise AI system interdependencies.
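Moving from regulation to automatable requirements can be sketched as rules that are re-run against each AI system's record: when a rule is updated and a previously passing system starts failing, that is compliance drift made visible. This is a minimal illustrative sketch; the rule IDs, fields, and checks are invented for the example and are not Novo Nordisk's setup.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AISystemRecord:
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Requirement:
    """A regulatory obligation expressed as a machine-checkable rule."""
    rule_id: str
    description: str
    check: Callable[[AISystemRecord], bool]

def compliance_report(system: AISystemRecord,
                      requirements: list) -> dict:
    """Re-run all current rules; failures after a rule update reveal
    compliance drift even when the system itself has not changed."""
    return {r.rule_id: r.check(system) for r in requirements}

# Hypothetical rules derived from documentation and oversight obligations.
rules = [
    Requirement("DOC-01", "Model documentation exists",
                lambda s: bool(s.properties.get("model_card"))),
    Requirement("OVR-01", "Human oversight contact assigned",
                lambda s: bool(s.properties.get("oversight_owner"))),
]

system = AISystemRecord("triage-bot", {"model_card": "v2"})
report = compliance_report(system, rules)
```

Rule-based checks like these cover the mechanical layer; the abstract's chatbot- and agent-based automation would sit on top, handling requirements that resist simple predicates.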
Explainability is often treated as a compliance afterthought; at Wiley, it is a system design requirement. At the scale of one of the world’s largest scholarly publishers, that means AI outputs must be traceable, transparent about what models are trained on, clear on where human intervention sits in the workflow, and explicit about how updates and corrections are managed over time. In this session, Pascal Hetzscholdt, Director of AI Strategy and Content Integrity, provides a candid, technical look at how enabling students and researchers to prompt directly against curated, licensed content materially reduces hallucinations. He will address realities rarely discussed publicly: cost pressures that drive unseen model changes, quality drift when models switch, and the operational discipline required to maintain standards. The session explores how to secure meaningful quick wins while protecting long-term information integrity and strengthening institutional situational awareness.
As AI systems scale and grow in complexity, the lines between engineering responsibility and governance oversight can blur. This panel explores where frictions emerge, how technical and governance teams can collaborate effectively, and whether current role definitions are fit for purpose. Through real-world examples, the discussion will probe whether enforcing policy-as-code and defining accountability truly resolves tension.
• Assigning clear roles that reduce friction and evolve with system complexity.
• Embedding governance into practice through technical pipelines.
• Integrating monitoring and checks to enforce accountability in real time.
AI transformation is not a technology upgrade; it is a strategic choice about who you are becoming. In a rapidly expanding AI ecosystem, organisations must define direction before scaling adoption, ensuring investments drive measurable value and sustained competitive relevance. This closing panel explores how leaders align AI ambition with profitability, brand visibility, and responsible growth. The conversation moves beyond experimentation to long-term positioning in an increasingly crowded and fast-moving market.
• Defining strategic direction before scaling AI.
• Prioritising measurable value over chasing AI hype.
• Strengthening brand visibility within evolving AI ecosystems.