The EU AI Act is setting the pace for AI regulation, but it is far from the only framework shaping enterprise adoption. The UK has opted for a more sector-led model, the US is layering federal guidance with state-level rules, and governments worldwide are weighing innovation against oversight. For global organisations, the challenge is clear: how to operationalise compliance across multiple jurisdictions while still driving competitive advantage. This panel explores the state of play, key risks, and practical steps for building resilient governance.
• Unpacking the EU AI Act and its implications for high-risk systems
• Comparing approaches across the UK, US, and other major jurisdictions
• Building agile compliance strategies that enable global AI innovation at scale
Leave with clarity on regulatory trends and actionable insights for cross-border compliance.
As AI systems become increasingly autonomous, governance cannot remain static. This panel brings together leaders from highly regulated industries to explore how they are embedding dynamic guardrails into AI operations, ensuring safety and accountability without stalling innovation.
• Operationalising governance for multi-agent and self-directed workflows
• Meeting evolving regulatory standards across global jurisdictions
• Balancing innovation speed with ethical and compliance frameworks
Learn how to design governance models that evolve with the pace of AI autonomy.
GenAI is transforming critical sectors, from personalised learning to clinical research. This panel highlights real-world deployments and the strategies required to deliver impact responsibly.
• Scaling AI in public services without widening digital inequality
• Measuring social impact beyond commercial KPIs
• Partnering across ecosystems to accelerate innovation in high-impact sectors
Leave with practical insights into using AI as a force for positive societal change.
This panel brings together industry leaders and compliance experts to discuss what governance will look like as AI becomes autonomous - and how organisations can harmonise policies across global markets.
• Understanding regulatory trends shaping the next five years
• Aligning enterprise governance with multi-region compliance demands
• Preparing for cross-industry collaborations on responsible AI standards
Leave with a clear view of the regulatory horizon and practical steps to future-proof your governance strategy.
AI investment is accelerating, but proving tangible near-term business value remains a challenge. This panel brings together enterprise AI leaders to unpack how to secure continuous internal buy-in, collaborate across non-technical functions, and position AI as a core business enabler - not just a technical experiment.
• Defining and measuring the real impact of AI investments: cost, productivity, revenue, and beyond
• Securing executive support and funding for AI products in competitive prioritisation environments
• Communicating value to non-technical stakeholders and navigating organisational AI illiteracy
• Embedding AI thinking across product, operations, and business strategy
• Evolving your role from technical lead to enterprise AI change agent
As financial institutions race to adopt GenAI, the biggest risks aren’t just in the models - they’re in how those models are applied. At Danske Bank, Khaled is building a specialist model evaluation function to meet exactly this challenge: connecting technical robustness to practical deployment, and bridging the gap between regulation, risk, and real-world GenAI use cases. With a mandate to evaluate GenAI applications within a three-lines-of-defence framework, and to interpret evolving EU AI Act requirements, this session shares a first-mover perspective on what responsible GenAI adoption really looks like in modern banking.
• Evaluating GenAI applications - not just models - within enterprise risk frameworks
• Interpreting and applying EU AI Act principles to real-world GenAI deployments
• Building a new kind of MRM function: beyond credit models, into AI-enabled operations
For many financial services firms, the biggest barrier to GenAI deployment isn’t capability - it’s compliance. In response, the Financial Conduct Authority has launched AI in Live Testing: a first-of-its-kind regulatory initiative that allows firms to trial real-world GenAI use cases with consumers, under supervision and with clear assurance mechanisms. In this session, Ed shares early insights from the programme, and how the FCA is enabling innovation with guardrails, helping firms move from POC to production with confidence, not caution.
• Enabling safe GenAI deployment through the FCA’s supervised ‘Live Testing’ environment
• Supporting firms in building scalable assurance and governance from the start
• Learning from live implementations: what’s working, what’s not, and what’s next
As AI adoption accelerates, so does its environmental impact. At the Metropolitan Police, Johnny is helping to shape a future where sustainability is woven into enterprise architecture - connecting emerging AI capabilities to the realities of energy use, cloud consumption, and carbon impact. As a member of the Green Software Foundation and a leading voice in the UK’s push for Green AI standards, he offers a timely and important perspective: how public institutions can pursue innovation responsibly, through architectures that support both intelligence and impact reduction.
• Connecting AI strategy to carbon reduction: why Green AI must be part of the plan
• Designing for sustainability: cloud, compute and governance choices that reduce emissions
• Aligning with Green Software Foundation standards to future-proof responsible public AI
With a rich background spanning research, academia, startups, and AWS, Bogdan joined The Economist to do something bold: build AI infrastructure where none existed. In this session, he shares how he's helped transform a landscape of fragmented ideas into a functioning AI platform - reducing model deployment time from six months to minutes, automating ML workflows, and preparing the organisation to responsibly integrate generative and agentic AI. It’s not just about models - it’s about people, processes, and a culture that can evolve as fast as the tech.
• Operationalising GenAI: moving from experimentation to production-ready infrastructure
• Building sustainable, scalable AI workflows that balance agility with governance
• Managing culture change: enabling teams without rushing into every shiny new tool
As Chief Technology Officer at V.Group, Jeremy is leading efforts to harness AI and GenAI across one of the world’s most complex and global industries. With vast proprietary datasets and operations spanning crews at sea to on-shore command centres, the shipping sector presents both enormous opportunities and unique challenges for AI adoption. Jeremy will share how V.Group is approaching innovation in this space - from early proofs of concept using specialised models to unlock unstructured data, to predictive systems that transform compliance and risk management.
• Exploring how shipping’s complexity shapes AI opportunities and challenges at scale
• Moving from proofs of concept to applied AI that delivers measurable operational impact
• Building data access and trust across crews, shore operations, and external stakeholders
Generative AI was the first leap; autonomy is the second - so what’s next? This keynote offers a forward-looking view on neuro-symbolic AI, human-AI symbiosis, and the policy, compute, and ethics challenges shaping the next decade.
• Tracking emerging models and architectures beyond LLMs
• Preparing for convergence across AI, IoT, and intelligent automation
• Anticipating next-generation risks, from sustainability to weaponisation
Leave with a long-term perspective to guide enterprise investment and innovation strategies.