Why Responsible AI Summit?

The Responsible AI Summit was first launched in 2023.

The idea came off the back of our Generative AI Summit, where much of the discussion centred on the risks of Generative AI, such as hallucinations, security concerns, and IP leakage, which were also a key barrier to moving AI projects into production.

Much of the initial research focused on what exactly to call the event. Ethical AI felt too academic. AI Governance? Too narrow. AI Safety? Too focused on catastrophic risk. Trustworthy AI? Too woolly.

We knew we wanted this to be an enterprise event, grounded in real-world application and case studies, not abstract ideals. Responsible AI captured that best. It reflected the balance our audience was grappling with: how to drive innovation while minimising risk for businesses deploying AI.

The first event ran before the EU AI Act came into force and saw a niche group of 70+ people join us in London. While it was a small group, feedback was excellent and centred on the fact that this community were delighted to finally find an event of their own, rather than a handful of Responsible AI topics tucked into a wider AI conference. Fast forward to 2024 and the event more than doubled in size, reflecting the growing importance, emerging legislation, expanding community and increasing professionalisation of Responsible AI.

This year we plan to continue scaling the event in line with the needs of the community, and we intend to do that by adding a new stream.

The first stream will focus on AI Governance and Risk Management, while the second, new stream will be more technical and engineering-focused, looking at how to integrate Responsible AI at the point of development.