The EU AI Act and What It Means for Your AI Agents
A plain-language reading of the EU AI Act for teams shipping customer-facing agents. What counts as "high-risk," what documentation you owe, and what the GPAI rules actually require.
This post is an engineering-grade summary, not legal advice. Work with a qualified attorney before making compliance decisions for your company. See the official text at artificialintelligenceact.eu.
The EU AI Act has a reputation for being vague. That's partly because the regulation is genuinely wide in scope, and partly because most summaries online are written by people trying to sell you a compliance service. The actual rules, if you read them carefully, fall into a short list of concrete requirements. Here is what we think operators shipping AI agents in the EU today actually need to know.
Four risk tiers, and almost nobody is "high-risk"
The Act classifies AI systems into four tiers:
- Prohibited — social scoring, real-time remote biometric identification in public spaces (with narrow exceptions), and a few others. If you are doing one of these things, you have a legal problem, not a compliance one.
- High-risk — a specific enumerated list in Annex III: employment decisions, access to education, credit scoring, critical infrastructure operations, law enforcement. Most SaaS AI agents (customer support, sales outreach, marketing copy) are NOT on this list.
- Limited-risk / transparency obligations — systems that generate content, impersonate humans, or interact with people directly. Includes most chatbots. The obligation is disclosure — "you are talking to an AI" — not audit.
- Minimal-risk — everything else. Voluntary codes of conduct recommended, no mandatory compliance burden.
If your agent isn't in tier 1 or tier 2, the obligations are light. If it IS in tier 2, the obligations are substantial — see below.
What "high-risk" obligations actually look like
For a high-risk system (Annex III), the Act requires:
- Risk management system — documented processes for identifying, mitigating, and monitoring risks from the AI's operation.
- Data governance — training and validation data traceable, representative, and relevant. Datasets documented.
- Technical documentation — how the system works, what it was trained on, what its known limitations are. This has to exist before you ship.
- Record-keeping — automatic logs of system operation retained for a period appropriate to the use case. This is the bit most engineering teams underweight. The Act explicitly expects traceable records.
- Transparency + human oversight — users must be informed of the system's capabilities and limitations; a human must be able to intervene, override, or stop the system.
- Accuracy, robustness, cybersecurity — the system must perform consistently and resist adversarial inputs.
- Post-market monitoring — ongoing collection and analysis of operational data to detect drift or harm.
The upshot: if you ship a high-risk agent, you owe the regulator a standing audit trail, not a snapshot. That is the central reason we built Precipiq around a tamper-evident ledger instead of a lossy event stream.
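To make "tamper-evident ledger" concrete, here is a minimal sketch of hash-chained record-keeping: each entry commits to the hash of the previous one, so editing a past record breaks the chain and is detectable. Class and field names are illustrative only, not the Act's terminology and not Precipiq's implementation.

```python
import hashlib
import json
import time

class DecisionLedger:
    """Append-only log where each entry commits to the previous entry's
    hash, so any retroactive edit breaks the chain. Illustrative sketch."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "ts": time.time(),
            "record": record,       # e.g. model used, input summary, decision, human override
            "prev_hash": prev_hash,
        }
        # Canonical JSON (sorted keys) so the hash is deterministic.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute every hash; tampering with any past entry fails here."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "record", "prev_hash")}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A real system would also anchor the chain externally (e.g. periodically publishing the latest hash), since an attacker who can rewrite the whole file can rewrite the whole chain; the point of the sketch is only that a standing, verifiable audit trail is a small amount of code, not a research project.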
GPAI — the other provisions
Separate from the risk tiers, the Act has specific rules for general-purpose AI models (GPAI). If you train or distribute a foundation model placed on the EU market, you owe (per Article 53):
- A technical documentation packet (model card, training data summary, intended use).
- A published summary of the training data.
- Compliance with EU copyright law, including honoring the text-and-data-mining (TDM) opt-out in the Copyright Directive.
- For models presumed to pose "systemic risk" (training compute above 10^25 FLOPs, per Article 51), additional obligations: adversarial evaluation, serious-incident reporting, and cybersecurity protections.
Most operators shipping application-layer agents on top of existing models (GPT-4, Claude, Gemini) aren't GPAI providers — the upstream model provider is. But you inherit disclosure obligations downstream, so knowing which model you use and being able to prove it matters.
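The 10^25 FLOP threshold can be roughly sanity-checked with the common C ≈ 6·N·D approximation from the scaling-law literature (compute ≈ 6 × parameters × training tokens, covering forward and backward passes). This is an engineering rule of thumb, not language from the Act, and the numbers below are made up for illustration:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: C ~ 6 * N * D.
    A heuristic from the scaling-law literature, not a legal test."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption threshold

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
c = training_flops(70e9, 15e12)
print(f"{c:.2e} FLOPs, above threshold: {c > SYSTEMIC_RISK_THRESHOLD}")
```

A run at that hypothetical scale lands at roughly 6.3 × 10^24 FLOPs, below the presumption threshold; the point is that the threshold is a question you can answer with arithmetic, not a judgment call.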
What operators should do this quarter
- Classify your system. Read Annex III carefully. Most application-layer agents aren't high-risk, but edge cases (recruiting, credit, access control) are easy to trip into.
- Start logging. Even for limited-risk systems, a durable decision log costs little to keep and saves you enormous pain if scope changes later.
- Write the transparency disclosure now. "You are interacting with an AI assistant" somewhere visible in the UX covers the limited-risk obligation. Don't wait for the audit.
- Document your model dependencies. A simple table: model name, provider, version pin, date of last upgrade. Auditors want this; you will also want it for incident response.
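The dependency table above can be as simple as a checked-in manifest. A sketch in Python, where every name and value is a placeholder rather than a real model or provider:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDependency:
    """One row of the model-dependency table. Field names are
    illustrative, not mandated by the Act."""
    name: str           # your internal name for the dependency
    provider: str       # upstream model provider
    version_pin: str    # exact model version your system calls
    last_upgraded: str  # ISO date of the last version bump

DEPENDENCIES = [
    ModelDependency("support-agent-llm", "ExampleProvider",
                    "example-model-2025-01-15", "2025-03-01"),
]

# Emit as JSON so the manifest lives in version control
# and can be diffed at audit or incident-response time.
print(json.dumps([asdict(d) for d in DEPENDENCIES], indent=2))
```

Keeping this in the repo, next to the code that pins the version, means the manifest changes in the same commit as the dependency does.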
Where Precipiq fits
The EU AI Act's record-keeping obligation for high-risk systems is exactly the kind of requirement a tamper-evident decision ledger satisfies. For limited-risk systems (the majority of operator AI), the same ledger is cheap insurance if your scope expands: if your system later moves into the high-risk tier, compliance is dramatically harder without retroactive evidence of how it behaved before the scope change.
We're building the infrastructure operators will need when regulators start asking real questions. The time to start capturing decisions is before you need them.