AI Act: what the first general-purpose model compliance looks like

25 November 2024 · 2 min read

The first rules on foundation models kick in August 2025. What providers, integrators and end customers must do.

The AI Act entered into force on August 1, 2024. Rules for "general-purpose AI" (GPAI) models — foundation models like Claude, GPT, Gemini, Llama — apply from August 2, 2025. Not far away. Here is the concrete picture.

All GPAIs

Technical documentation available to authorities (training data, evaluation, capabilities, limits), a copyright policy for training data, and a public summary of the content used for training. Nothing revolutionary, but for anyone training large models it is real work.

"Systemic impact" GPAIs

Models trained with more than 10²⁵ FLOPs of cumulative compute (roughly frontier scale: GPT-4, Claude 3, Gemini 1.5). Extra duties:

  • Model evaluation with standardised red-teaming.
  • Systemic risk tracking and mitigation.
  • Adequate cybersecurity.
  • Significant incident reporting to the AI Office.
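To get a feel for the 10²⁵ FLOPs threshold, a common back-of-the-envelope heuristic estimates training compute as 6 × parameters × tokens. This approximation is a community rule of thumb, not part of the Act, and the model sizes below are hypothetical:

```python
# Rough training-compute estimate via the common 6 * N * D heuristic
# (an approximation, not a method defined by the AI Act).
AI_ACT_THRESHOLD_FLOPS = 1e25  # systemic-risk presumption threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate cumulative training FLOPs as 6 * N * D."""
    return 6 * params * tokens

def crosses_threshold(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= AI_ACT_THRESHOLD_FLOPS

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)  # 6.3e24, below the threshold
```

The point is that the threshold currently separates a handful of frontier runs from everything else; most providers fall only under the baseline GPAI duties.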

What changes for integrators

If you build an app using Claude or GPT, you are a downstream deployer, not a provider. Your compliance is simpler — unless you fine-tune or materially modify the model. You still must:

  • Provide user-facing transparency: users must know they are talking to an AI.
  • Mark generated content (deepfakes, synthetic images).
  • Log interactions when the system falls into a high-risk category.
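As a minimal illustration of these three duties, a deployer might attach a disclosure, a machine-readable marking, and an optional interaction log to each model response. This is an illustrative sketch, not a format mandated by the Act; the field names are invented:

```python
# Illustrative sketch of deployer-side duties: disclosure, marking, logging.
# Field names and structure are hypothetical, not prescribed by the AI Act.
import hashlib
import datetime

DISCLOSURE = "This response was generated by an AI system."

def package_response(text: str, model: str, high_risk: bool = False) -> dict:
    record = {
        "content": text,
        "disclosure": DISCLOSURE,   # user-facing transparency
        "ai_generated": True,       # machine-readable marking of synthetic content
        "model": model,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    if high_risk:
        # Keep an interaction log when the system is in a high-risk category.
        record["log_id"] = hashlib.sha256(
            (record["timestamp"] + text).encode()
        ).hexdigest()[:16]
    return record
```

In practice the logging requirement would feed a persistent audit trail rather than an in-memory dict, but the shape of the obligation is the same.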

Code of practice

The AI Office is finalising the Code of Practice for GPAI: a "soft" compliance route that, if followed, carries a presumption of conformity. It is still in consultation, and providers that want a clean compliance path are participating.

What to do today

For Italian SMEs using integrated AI: start a risk analysis, map which systems fall into which category, and put transparency processes in place before the deadline. Do not wait for August 2025.