You’re in the boardroom. The quarterly results look good. But then legal speaks up:
“We need to talk about the AI tools in operations. The EU’s new regulation kicks in next year—and right now, we don’t even know what we’re using.”
Sound familiar?
The EU AI Act isn’t a future problem. It’s the next GDPR – only this time, it’s not about privacy, it’s about control.
Control over what AI systems you build, buy, or embed. Control over who’s responsible when something goes wrong. And control over how you document, monitor, and prove you’re playing by the rules.
Just like GDPR forced companies to rethink data, this law will force you to rethink how AI decisions happen – across IT, legal, ops, product, and the board.
But here’s the good news: if you act early, you won’t just avoid fines. You’ll gain operational clarity, risk alignment, and even market trust.
What the Law Actually Applies To
A German medtech startup launches an AI feature for diagnostics. It’s just part of the product–until it’s flagged as high-risk under the EU AI Act.
Now the founder is expected to show risk controls, logging, oversight, and compliance documentation… fast.
That’s the reality. The AI Act isn’t about software. It’s about responsibility–what your system does and what role you play in using or distributing it.
⚖️ Four Risk Levels
Unacceptable
Fully banned. Covers social scoring, facial recognition databases scraped from the web, emotion detection at work or in school, and more.
High-Risk
Allowed with strict controls. Think medical diagnostics, hiring tools, financial scoring.
Transparency Risk
You can use it—but users must be told they’re interacting with AI. Applies to chatbots, synthetic media, etc.
Minimal Risk
No obligations. Applies to low-impact systems like spam filters or AI-enabled grammar checkers.
🧩 Four Roles, Four Sets of Duties
Provider
You built the system or put it on the market under your own name. You’re responsible for compliance, documentation, risk controls, and post-market monitoring.
Example: A French firm offering an AI-based hiring tool across Europe.
Deployer
You use the system in your own operations. You need to ensure oversight, transparency, and internal controls.
Example: A retailer using an AI chatbot to automate customer service.
Importer
You bring non-EU AI into the EU. You must check the provider’s compliance before market entry.
Example: A Dutch firm importing a U.S. analytics model for local hospitals.
Distributor
You offer AI tools to others. If you discover a compliance issue, you must act.
Example: An IT vendor selling third-party AI solutions to clients.
Your first move?
Map every AI tool your company touches. Tag the role you play in each–and identify the risk level.
That’s how compliance begins.
Your Timeline in Plain Terms
Think of the EU AI Act timeline as a project schedule, not just a legal countdown. The earlier you move, the easier it is to control the outcome. Delay too long, and you’ll be reacting under pressure.
Here’s what matters–and when:
📅 August 2024
The law is in force. You’re already on the clock, even if some provisions haven’t kicked in yet.
📅 February 2025
The bans on prohibited practices and the first general provisions, including AI literacy duties, start applying. It’s also the right moment to assess how exposed you are to GPAI–especially if you rely on large external models–because those rules follow six months later.
📅 August 2025
GPAI-specific rules take effect. That means transparency obligations, copyright safeguards, and mandatory information sharing with downstream users.
If you use tools like ChatGPT, Claude, or open-source LLMs–this applies to you.
📅 August 2026
Most core requirements become enforceable, especially for high-risk systems. You’ll need governance in place: risk classification, documentation, human oversight, logging, and post-market monitoring.
📅 August 2027
The final obligations apply, including the rules for high-risk AI embedded in regulated products and the compliance deadline for GPAI models already on the market before August 2025. By this point, regulators expect you to be running a mature AI governance model–readiness won’t be optional.
Where to start now
You don’t need to do everything at once–but you do need to start. Focus on:
- Building an AI inventory
- Identifying high-risk systems
- Assigning roles and control ownership
Every quarter you act early buys you time, control, and credibility. Wait too long, and you’ll be playing catch-up when enforcement begins.
What’s Banned vs What’s Just Hard
A startup in Spain builds a facial recognition product on a database scraped from the web–without realizing that practice is now banned under Article 5 of the EU AI Act. By the time regulators intervene, it’s too late.
The fines come fast, and so does the reputational damage.
Meanwhile, a logistics firm classifies the AI it uses to assign shifts and tasks to warehouse staff as high-risk. It’s complex, yes–but with documented oversight, clean data, and internal logging, it passes pre-assessment with no major issues.
This is the line the Act draws.
❌ Some AI use-cases are banned outright
- Social scoring systems, whether run by public authorities or companies
- Facial recognition databases built by scraping images from the web or CCTV
- Emotion detection in schools or workplaces
- Real-time remote biometric identification in public spaces by law enforcement, except under narrowly defined conditions
- Predictive policing based solely on profiling
These systems aren’t just risky. They’re illegal. And enforcement starts early.
In contrast, high-risk systems are allowed–but regulated. Tools like CV screening software, educational testing platforms, medical diagnostics, and credit-scoring AI all fall under this category.
These systems must meet strict requirements, including:
- Documented risk management
- Human-in-the-loop oversight
- High-quality training data
- Logging and traceability
- Clear technical documentation
- Ongoing post-market monitoring
One category requires a complete stop. The other requires operational maturity.
You don’t need perfection to comply with the high-risk rules–you need ownership, structure, and process. Start early, move steadily, and compliance becomes manageable.
GPAI Without the Hype: What You Must Do Now
General-purpose AI isn’t just a Big Tech issue. If your teams use tools like ChatGPT, Claude, or Gemini, you’re in scope–and now on the hook.
Under the EU AI Act, GPAI providers must publish training data summaries, maintain technical documentation, follow copyright rules, and pass key details downstream.
If their models pose “systemic risk,” extra safeguards apply.
But if you’re using these tools inside your business, you’re not off the hook. You’re expected to:
- Confirm vendor compliance
- Request documentation–model cards, data summaries, intended use
- Log internal use cases
- Apply human oversight where needed
The GPAI Code of Practice (July 2025) gives you a clear head start. Use it to guide vendor asks and build your internal playbook—even if you’re not high-risk.
Start simple: list the tools, providers, and documents received. That’s your baseline.
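If it helps to make that baseline concrete, here is a minimal sketch in Python. The tool names, providers, and the set of “required” documents are illustrative assumptions for the example, not a list prescribed by the Act–swap in whatever your vendors actually owe you.

```python
# A minimal GPAI baseline register: which tools we use, who provides them,
# and which documents we have actually received from each vendor.

REQUIRED_DOCS = {"model card", "training data summary", "intended use statement"}  # illustrative set

gpai_register = [
    {"tool": "ChatGPT (API)", "provider": "OpenAI", "docs_received": {"model card"}},
    {"tool": "Claude", "provider": "Anthropic", "docs_received": {"model card", "intended use statement"}},
    {"tool": "Internal summarizer (open-source LLM)", "provider": "in-house", "docs_received": set()},
]

def documentation_gaps(register, required=REQUIRED_DOCS):
    """Return, per tool, the vendor documents we still need to chase."""
    return {entry["tool"]: sorted(required - entry["docs_received"]) for entry in register}

for tool, missing in documentation_gaps(gpai_register).items():
    status = "complete" if not missing else "missing: " + ", ".join(missing)
    print(f"{tool}: {status}")
```

A shared spreadsheet does the same job; the point is one agreed list of tools, providers, and documents that anyone in the company can query.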
When speed matters, firms often bring in an interim AI compliance lead to build this foundation and get everyone aligned–fast.
Build Your AI Inventory and Classify Risk
Picture this: you’re in a boardroom and someone asks,
“How many AI systems are we running—and what’s their risk level?” If your answer starts with a pause, you’re not alone. But you’re also not ready.
Here’s how to take control–quickly and clearly.
Start by creating a simple table with five columns:
System | Owner | Risk Level | Role | Documentation
I. Begin with what’s obvious: chatbots, pricing engines, CV screeners, scoring models.
II. For each, tag the risk level: banned, high-risk, transparency-only, or minimal.
III. Identify your role: Did you build it (provider)? Use it (deployer)? Source it externally (importer/distributor)?
IV. Collect what you already have: contracts, model cards, vendor docs, internal assessments.
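To make the five-column table concrete, here is a minimal sketch in Python. The risk levels and roles mirror the categories described above; the example systems, owners, and documentation entries are hypothetical placeholders.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    BANNED = "banned"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency-only"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystem:
    # One row of the inventory: System | Owner | Risk Level | Role | Documentation
    name: str
    owner: str
    risk_level: RiskLevel
    role: Role
    documentation: list[str] = field(default_factory=list)

inventory = [
    AISystem("Customer service chatbot", "Head of Support", RiskLevel.TRANSPARENCY, Role.DEPLOYER, ["vendor contract"]),
    AISystem("CV screening tool", "HR Director", RiskLevel.HIGH_RISK, Role.DEPLOYER, []),
    AISystem("Spam filter", "IT Operations", RiskLevel.MINIMAL, Role.DEPLOYER, []),
]

# The two questions the board will ask first: which systems are high-risk,
# and which of those still have documentation gaps?
high_risk = [s for s in inventory if s.risk_level is RiskLevel.HIGH_RISK]
gaps = [s.name for s in high_risk if not s.documentation]

print(f"{len(inventory)} systems mapped, {len(high_risk)} high-risk, documentation gaps: {gaps or 'none'}")
```

Whether you keep this in code, a spreadsheet, or a GRC tool matters less than keeping it current and owned by someone.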
In just a few working sessions, the fog starts to clear.
Before: No inventory, vague guesses, disconnected teams.
After: 15 tools mapped, 3 high-risk systems flagged, clear owners named, documentation gaps visible.
This is how readiness begins—not with theory, but with a shared view of what’s real and what needs work.
Role Clarity and the Minimum Controls That Work
Running AI in your business without defined roles is like managing a factory floor without shift leads. It might function on good days – but it breaks when things go wrong.
Treat this like your AI task force. Each team has a clear role to play, and while the setup doesn’t need to be heavy, it does need to be intentional.
The Board defines the risk appetite and checks progress quarterly. Their job is to ask: “Is this our GDPR moment – and are we resourcing it seriously?”
The CIO or Chief Data Officer owns the AI inventory, vendor oversight, and compliance workflows. They know which models are in use, and who’s accountable for each.
Product and operations keep human safeguards in place – ensuring fallback processes and real-world testing are part of the workflow. Their question: “What happens when the AI goes sideways at 2 a.m.?”
Data science and ML teams manage technical documentation, model drift detection, and version control. They stay alert to one key concern: “Are we still running the model we approved last month?”
Legal and compliance drive policy, assessments, and audit readiness. Their role is simple but critical: “Can we show documentation – before we’re asked for it?”
What’s the smallest stack of controls that actually works?
- One shared AI inventory
- Defined human oversight for high-risk systems
- Active logging
- A technical file in motion
- A standard vendor checklist
Keep it light. Keep it live. That’s what survives an audit.
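As one illustration of what “active logging” with human oversight can look like in practice, here is a minimal sketch in Python of an audit trail for AI-assisted decisions. The field names, the JSONL file, and the example values are assumptions for this sketch, not requirements lifted from the Act.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system: str, ai_recommendation: str, reviewer: str | None,
                    final_decision: str, logfile: str = "ai_decision_log.jsonl") -> None:
    """Append one traceable record: what the AI suggested, who reviewed it, what was decided."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "ai_recommendation": ai_recommendation,
        "human_reviewer": reviewer,          # None would mean no human looked at it
        "final_decision": final_decision,
        "overridden": ai_recommendation != final_decision,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a high-risk hiring tool where a recruiter overrides the AI's suggestion.
log_ai_decision(
    system="CV screening tool",
    ai_recommendation="reject",
    reviewer="recruiter_017",
    final_decision="invite to interview",
)
```

An append-only record like this is cheap to run, and it answers the auditor’s first question: can you show who was in the loop, and when?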
When to Call In Temporary Leadership
When deadlines accelerate or scope balloons beyond the team’s capacity, companies bring in a temporary AI compliance lead – someone who builds the inventory, aligns vendors, and drives documentation forward without pulling product teams off-track.
CE Interim specializes in placing these operators with 30- to 90-day mandates – focused, embedded, and ready to execute.
Fines Are Not the Point – Avoid Them by Design
Yes, the fines are steep: up to €35 million or 7% of global annual turnover–whichever is higher–for the worst violations. But that’s not what should keep you up at night.
The real risk? Deals delayed because of compliance gaps. Products blocked at launch. Investor calls derailed by governance questions.
Fixing those problems later costs more – in money, in time, and in reputation.
The cheapest path is to design compliance in from the start, not bolt it on after the fact.
30-Day Readiness Plan
You’ve made it this far – now here’s how to move, fast and smart.
Week 1: Map the landscape
Build your AI inventory. Assign owners. Flag anything that touches sensitive data, customers, or compliance areas.
Checkpoint: 80% of tools mapped, red flags surfaced.
Week 2: Engage your vendors
Request documentation — model cards, training summaries, compliance declarations. Use the GPAI Code of Practice as your guide.
Checkpoint: Docs collected, workflows drafted.
Week 3: Pressure test
Run a simulation on a high-risk use case. Test human-in-the-loop procedures and review how fallbacks and logs are triggered.
Checkpoint: Gaps identified, fixes planned.
Week 4: Align leadership
Brief the board on risks, gaps, and budget asks. Present a clear 90-day roadmap tied to product and compliance objectives.
Checkpoint: Plan approved, momentum secured.
When teams are stretched thin, an interim program lead can run this entire sprint, hand over cleanly, and exit – leaving structure behind.
Conclusion – From Risk to Readiness
This isn’t just another regulation. It’s a reset button for how companies handle AI – across tech, product, compliance, and leadership.
The AI Act doesn’t ask for perfection. It asks for ownership. For visibility. For readiness.
Start by mapping what you already use. Define who’s responsible. Show how the risk is being controlled.
Start small. Keep it real. Build momentum from there.
Start this week – and stay ahead.