EU publishes General-Purpose AI Code of Practice

Three-chapter code addresses transparency, copyright and safety obligations under the EU AI Act


Michael Hill
07/11/2025


The European Commission has officially received the finalized General-Purpose AI Code of Practice. The comprehensive framework, developed through a multi-stakeholder process involving nearly 1,000 participants, establishes voluntary guidelines for artificial intelligence (AI) model providers ahead of mandatory compliance requirements taking effect on August 2, 2025.

The rules become enforceable by the Commission's AI Office one year later for new models and two years later for existing models.

The three-chapter code addresses Transparency, Copyright, and Safety and Security obligations under the EU AI Act. Its measures include comprehensive documentation requirements, rights-reservation protocols and systemic risk management controls.

In the coming weeks, EU Member States and the Commission will assess the code's adequacy. The code will also be complemented by Commission guidelines on key concepts related to general-purpose AI models.


Code supports compliance with the EU AI Act

The General-Purpose AI Code of Practice framework applies to providers placing general-purpose AI models on the EU market regardless of provider location. It places particular focus on models with systemic risk capabilities and those integrated into downstream AI systems.

The chapters on Transparency and Copyright offer all providers of general-purpose AI models a way to demonstrate compliance with their obligations under Article 53 of the AI Act.

The chapter on Safety and Security is relevant only to the small number of providers of the most advanced models, those subject to the AI Act's obligations for providers of general-purpose AI models with systemic risk under Article 55 of the AI Act.




Providers of general-purpose AI models who voluntarily sign the code will be able to demonstrate compliance with the relevant AI Act obligations. In doing so, signatories to the code will benefit from a reduced administrative burden and increased legal certainty compared to providers that prove compliance in other ways.

“Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” said Henna Virkkunen, EVP for tech sovereignty, security and democracy. “Co-designed by AI stakeholders, the code is aligned with their needs. Therefore, I invite all general-purpose AI model providers to adhere to the code. Doing so will secure them a clear, collaborative route to compliance with the EU’s AI Act.”




Is the General-Purpose AI Code of Practice fairly balanced?

Commenting on the General-Purpose AI Code of Practice, Randolph Barr, CISO at Cequence Security, said: “The oversight from the EU Commission is generally regulatory and policy-focused – not commercial – which is encouraging. That said, it does raise an important question: why was input limited to just a handful of large companies, without broader opportunity for community feedback?”

Smaller companies appear to have been largely excluded from the drafting process, which lacked formal transparency or open consultation, he claimed. “That limits the ability to scrutinize the intent, content and balance of the code.”

The code includes solid references to model evaluation, adversarial testing, training data transparency and risk disclosures, but smaller, innovative companies often pioneer new safety techniques, ethical designs and inclusive governance models, Barr argued.

“Excluding them risks shaping standards around incumbent risk appetites, prioritizing defensibility over agility or fairness. It also potentially slows down innovation and sidelines more open, decentralized or community-driven approaches to AI. Moving forward, I’d really like to see public comment periods, inclusion of startups and academic researchers and greater transparency around who is shaping these guidelines and what expertise they bring.”
