What is in the EU AI Act?

With the European Union’s Artificial Intelligence (AI) Act on the horizon, now is the time to learn more about it.

Almost a year after it was first drafted, the European Union’s Artificial Intelligence (AI) Act could end up being one of the most influential legal frameworks the world has ever seen. Although it has yet to be adopted, the Act aims to bring some much-needed oversight to the potential overreach of AI systems and the deeply marginalizing errors they can inflict on individuals.

Much of this stems from what is widely known as AI’s black box problem: even the creators of these models often cannot explain the reasons for the decisions the models make. The resulting biases can infiltrate modern life, leading to wrongful arrests, prejudiced hiring decisions and even financial ruin.

So there is a clear case for some form of regulation in this space, which is where the EU’s AI Act comes into play. The bill is currently being amended by members of the European Parliament and EU countries, but here is a quick guide to what is in store for AI models in the future (in Europe, at least).

The first draft of the bill is lofty in nature, requiring extra checks on “high-risk” AI systems that have the potential to harm people en masse. It addresses systems that make decisions such as criminal judgements, exam grading and potentially biased recruiting. The bill also takes aim at systems akin to China’s social credit scheme, labeling such AI models an “unacceptable” risk and banning them outright.

The breadth of the Act is truly impressive. Not only does it require people to be notified when they encounter deepfakes, biometric recognition systems, or AI applications that claim to read their emotions, it also restricts facial recognition, particularly law enforcement agencies’ use of it in public places.

On top of this, proposed amendments to the bill may result in a complete ban on predictive policing systems. These AI models analyze large data sets to preemptively deploy police to crime hotspots (think pre-crime in Minority Report). They are among the most controversial systems and the most beholden to the black box issue: they often lack transparency in how they operate, raising fears that biases can become automated through them.

The EU currently argues that the Act will only affect 5-10% of AI businesses, but legal experts contend that the vague definition of what makes an AI system “high-risk” could sweep in far more companies, potentially forcing them to retrain their systems or scrap them entirely.

In the draft Act, high-risk systems are explicitly defined as those used in:

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes

The problem, of course, is that some AI systems will fall into a middle ground between these categories. Despite this, major revisions to the proposal seem unlikely at this relatively late stage of the EU’s co-legislative process.

Some worry that the Act will slow down innovation – a constant concern in a Europe that struggles to keep up with both China and the US in technology. And some of its requirements may be impossible to implement at present: the draft demands that data sets be entirely free of errors and that model makers “fully understand” how their AI systems work. Even so, the importance of the Act may lie less in these specifics than in the example it sets for the wider West.

In any case, it will be at least another year before a final text is decided upon, and a couple more years before businesses will have to comply.
