Responsible AI cannot succeed as a collection of isolated policies and private frameworks. Yet competitive pressure, liability concerns, regulatory uncertainty, and reputational risk often limit how openly organisations collaborate. As AI systems grow more powerful and interconnected, the real question is no longer whether we need a Responsible AI community, but how that community genuinely wants to function.
AI’s rapid expansion carries a growing environmental footprint, from data centre infrastructure and hardware supply chains to model training, inference, and everyday user behaviour. Yet much of AI’s carbon impact remains opaque, distributed, and poorly measured. As adoption accelerates, organisations must ask a harder question: what is the true sustainability cost of AI, and who owns it?
Responsible AI is full of high stakes decisions and uncomfortable trade offs, and this session brings them into the open. Participants will engage with a series of provocative statements on today’s most contentious AI issues, ranging from frontier scaling to synthetic data, alignment risks, liability boundaries, open source governance, model interpretability, and beyond. After hearing each statement, attendees will split into groups, each tackling one hot button topic. Within each cohort, participants will debate both the affirmative and the opposing positions, then present their strongest arguments back to the room.
• Surfacing competing priorities that shape responsible AI decisions.
• Challenging assumptions through structured, balanced debate.
• Bridging technical, ethical, and strategic perspectives for better AI governance.
For an AI agent, even more than other systems, “Responsible AI" is indistinguishable from "working correctly. “As a world leader in responsible AI, Mastercard is advancing governance for the next era of autonomous systems. As AI evolves from predictive tools to agentic systems that act independently, governance must evolve with equal clarity and precision. This session discusses the additional risks agents bring above traditional and generative AI, including Mastercard’s practical approach to managing these at scale.
• The case for Responsible Agentic AI.
• Agentic Risks and Risk Taxonomy.
• Developing Trustworthy, Responsible and Safe AI Agents
AI literacy is the operating layer of responsible AI: shaping how teams interpret insights, exercise judgment, and maintain human oversight as AI becomes embedded in workflows. Moving beyond static adoption metrics, organisations must focus on cultural readiness and measurable capability shifts that influence governance and decision quality. This session explores practical approaches to literacy and culture so that AI deployment strengthens accountability and operational performance rather than simply increasing usage.
• Measuring literacy and decision impact beyond adoption metrics.
• Cultivating culture and shared understanding for responsible outcomes.
• Integrating literacy with operations to strengthen oversight and judgment.
The EU AI Act mandates human oversight, yet in practice, this often becomes a junior staff becoming an “AI checker” rubber-stamping automated outputs. In this candid session, Dr David Crelley, Head of Responsible AI & Data at Admiral Group, challenges the compliance-driven interpretation of human-in-the-loop and examines why oversight designed for efficiency frequently undermines effectiveness and accountability. He will share how his team is rethinking oversight as proactive engagement, using judge LLMs to do the hard lifting, deliberately introducing friction, cultural ownership, and meaningful intervention points into AI-enabled processes. David offers practical insight into how to design oversight models that create genuine human engagement rather than passive validation.
• Moving from passive validation to active engagement.
• Using LLMs to do the basic checks.
• Designing friction to strengthen human judgement.
• Embedding cultural ownership into AI oversight.
Every organisation says data is strategic, yet most still run on brittle pipelines, stale datasets, and underfunded infrastructure. As automation, real time AI systems, and agentic workflows explode, the gap between what models need and what data teams receive is widening fast. In this panel, speakers will unpack what “automated data” truly requires today, how to quantify its ROI, and how to build foundations that make Responsible AI actually possible.
Accuracy scores and leaderboard rankings only tell a small part of the story. In real environments, LLMs can be influenced, manipulated, and pushed into failure modes that standard evaluation simply doesn't capture. This session focuses on how LLMs behave once deployed, especially from a risk and security perspective. Drawing on practical examples such as prompt injection, RAG poisoning, and everyday failure patterns, we'll explore where current evaluation approaches fall short and what organisations should be testing instead. Rather than asking "is the model accurate?", the session reframes the question to: "how does the model behave under pressure, and where does it break?"
• Understanding how LLMs can be influenced and where real risks emerge.
• Identifying failure modes such as prompt injection, RAG poisoning, and hidden vulnerabilities.
• Rethinking evaluation to reflect real usage, adversarial conditions, and enterprise risk.
As AI systems become more conversational, expressive, and agentic, the push to humanise them is accelerating. In China, widespread deployment of highly human-like AI across daily life has already triggered regulatory action aimed at curbing over-anthropomorphism and protecting users from misplaced trust. Europe now has a choice: learn early or regulate late. In this session, Sarah Mathews, Group Responsible AI Manager at Adecco Group, explores what responsible design looks like before AI systems blur the line between tool and perceived actor, and how transparency, human-centricity, and governance must evolve as agentic capabilities scale.
• Anticipating regulatory risk from humanised AI systems.
• Designing transparency into increasingly agentic experiences.
• Protecting human agency as AI embeds into daily life.
As AI increasingly shapes core product, operational and strategic decisions, many organisations still treat ethics as a policy artefact or compliance exercise - often detached from the reality of day to day decision-making. Roos Brekelmans, Digital Innovation Accelerator at Vattenfall, explores the ethical, strategic and trust risks created by this gap. Rather than framing ethical AI as a control or governance problem, she reframes it as a practical business discipline - and a conversation skill. Drawing on hands-on innovation experience and academic foundations in technology ethics, this session moves beyond abstract principles to focus on concrete, repeatable practices that teams and leaders can embed directly into their innovation pipelines. Using AI responsibly is positioned not as a brake on progress, but as a lever for resilient, scalable, and strategically sound innovation.
• Embedding ethics into everyday and strategic business decisions.
• Using values to navigate real AI trade-offs under uncertainty.
• Linking responsible innovation directly to enterprise value and trust at scale.
As AI systems scale, the human labour powering them, data annotators, content moderators, and crowd workers, becomes operationally invisible. Yet laws such as the Corporate Sustainability Due Diligence Directive make oversight of supply chains a legal obligation, not a reputational choice. Ethical risk in AI no longer sits only within the model; it extends across global labour networks that train, filter, and sustain these systems.
This panel examines why auditing your AI supply chain matters, and what meaningful accountability looks like in practice. Speakers will highlight how standards-based evaluation, like Fairwork AI, can expose hidden labour risks, benchmark working conditions, and create enforceable transparency. The discussion moves beyond awareness to explore due diligence, procurement leverage, and how organisations can align AI deployment with fair work principles under emerging regulatory pressure.
• Exposing hidden labour across AI supply chains.
• Using certification frameworks to benchmark labour standards.
• Embedding due diligence into AI procurement decisions.
Generative AI is accelerating across insurance, bringing new capabilities and new risks to underwriting, claims, and customer operations. In this evolving environment, model risk functions must ensure that governance frameworks, validation processes, and lifecycle controls can adapt without losing rigour.
As generative AI embeds across enterprise systems, bias becomes structural: shaping decisions, communications, and customer outcomes at scale. What begins as a model limitation can quickly become a systemic risk, reinforced by data provenance, prompt design, feedback loops, and organisational incentives. This panel takes a critical look at bias as a socio-technical challenge spanning the full AI lifecycle. Moving beyond surface-level fairness claims, panellists will examine how to measure structural and emergent bias in production, define meaningful accountability, and govern generative systems that continuously adapt and influence behaviour.
• Examining bias across the AI lifecycle.
• Measuring structural and emergent bias in production.
• Embedding accountability beyond technical fixes.
“Human in the loop” emerged as a human centric safeguard: a way to keep people visibly involved in AI workflows and preserve human judgement at critical points. But as AI systems become increasingly agentic and capable of shaping the workflow itself, this model often reduces the human role to validation, monitoring, or procedural compliance. In this session, Bogdan Vrusias, Global Head of AI and Data Engineering at The Economist, examines how HITL can unintentionally narrow human agency rather than reinforce it. He argues for reframing the loop so that AI sits within a broader human process, one grounded in contextual reasoning, editorial discretion, and genuine ownership. The session offers a more honest and future proof approach to ensuring that humans guide the system, not the reverse.
• Revealing how HITL often constrains, rather than expands, human decision space
• Reframing AI as embedded in human processes, not humans inside machine loops
• Designing agentic-era workflows that preserve judgement, context, and real agency
As AI systems become more capable, organisations face a critical strategic question: is automation always the answer? What should be automated, and what should remain human? The line between efficiency and overreach is becoming harder to see, especially as “agentic” systems begin making decisions over time, not just executing predefined tasks. This plenary panel moves beyond technical definitions to examine boundaries. Where does useful automation end and risky autonomy begin? When does delegating decision-making create value, and when does it erode accountability, resilience, or trust? Leaders will explore not just what is possible, but what is appropriate, sustainable, and strategically aligned.