The code of practice is voluntary and aims to help companies establish internal mechanisms for implementing the AI law.
The regulation, which is entering into force on a staggered timetable, establishes curbs on general-purpose and high-risk AI and restricts certain applications outright.
Rules impacting “general purpose AI” like OpenAI’s ChatGPT or Anthropic’s Claude will apply from next month.
Breaching the AI Act can carry fines of as much as 7% of a company’s annual sales, or 3% for companies developing advanced AI models.
The code, which still needs a final sign-off from the European Commission and EU member states, has been controversial.
It triggered a backlash from some technology companies, including Meta Platforms Inc. and Alphabet Inc., which complained that earlier drafts went beyond the bounds of the AI Act and created an onerous new set of rules.
This month, European companies including ASML Holding NV, Airbus SE and Mistral AI also asked the commission, in an open letter calling for a more “innovation-friendly regulatory approach”, to suspend the AI Act’s implementation for two years.
The commission, which missed an initial May deadline to publish the code of practice, has so far declined to postpone the implementation.
The code was drafted under the guidance of officials from the commission, the EU’s executive branch, which organised working groups composed of representatives from AI labs, technology companies, academia, and digital rights organisations.
The commission will only start directly overseeing the AI Act’s application in August 2026.
Until then, enforcement will be in the hands of national courts, which may have less specialised technical expertise.
Signing the code of practice will give companies “increased legal certainty”, the commission has said.