EU AI Act: what your board needs to know before it is too late.
The compliance conversation most European organisations are having six months too late, and what board-ready AI governance actually looks like.
The EU AI Act became binding on 1 August 2024. Most boards we sit with treat it like GDPR in 2017: an interesting legal development, on the agenda for next quarter. That timing miscalculation is about to cost a lot of organisations a lot of money.
The hard cutoff for high-risk AI systems is 2 August 2026. From this date, any organisation operating a high-risk AI system inside the EU without a documented conformity assessment exposes itself to fines of up to €15 million or 3 percent of global annual turnover, whichever is higher. For prohibited practices, the ceiling is €35 million or 7 percent.
Three months out, most boards still cannot answer the four questions that decide whether their organisation is exposed.
01 The four board questions
Every board sponsoring AI work in Europe should be able to answer these in a sentence each, with a named owner behind each answer. If you cannot, your governance is not where it needs to be.
Question 1 · Are we in scope, and at which risk tier?
The Act classifies AI systems into four tiers: unacceptable risk (banned outright), high risk (the most heavily regulated tier, covering most enterprise use cases in HR, credit scoring, critical infrastructure, education, and biometric identification), limited risk (transparency obligations only), and minimal risk (no specific obligations). Your board needs an inventory of every AI system in production with its tier classification, signed off by legal and the business owner.
Question 2 · Who owns the conformity process?
Conformity assessment is not a legal department task. It cuts across data, risk, IT, the business unit, and external auditors. Boards we see succeed on this have appointed a single accountable executive, often the chief risk officer or a dedicated AI governance lead, with a small permanent team. Boards that delegate to "the AI working group" are six months behind.
Question 3 · What evidence will we present, and to whom?
Articles 9 to 15 of the Act spell out what every high-risk system must document: a risk management system, data governance, technical documentation, record-keeping, transparency information, human oversight, and accuracy, robustness, and cybersecurity measures. The market surveillance authority of the member state where you operate can request all of this. The evidence must exist before they ask.
Question 4 · How do we know our supply chain complies?
If you procure an AI tool from a vendor, your obligations do not stop at the licence agreement. You remain accountable for how the system performs in your context. That means contractual provisions on transparency, audit rights, model cards, and incident reporting. Most procurement teams have not updated their AI vendor templates yet. Your board should ask when they will.
02 What "board-ready AI governance" actually looks like
Three artefacts. Each retrievable within five minutes. Each updated quarterly. Nothing more, nothing less.
Artefact 1 · The AI inventory
One spreadsheet. One row per AI system in use. Columns: name, business owner, tier classification, date of conformity assessment, next review, residual risks logged, vendor or in-house, applicable regulated framework (banking, health, etc.). The board reviews this in full once a year and the high-risk subset every quarter.
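For teams that want the inventory machine-readable from day one, a minimal sketch of one row might look like the following. The field names, tier labels, and example values are ours for illustration; they are not terminology from the Act.

```python
# Illustrative sketch of one inventory row; field names and tier labels
# are our suggestion, not terminology from the Act.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class AISystemRecord:
    name: str                                    # e.g. "CV screening model"
    business_owner: str                          # a named individual, not a team
    risk_tier: str                               # "prohibited" | "high" | "limited" | "minimal"
    conformity_assessment_date: Optional[date]   # None = not yet assessed
    next_review_date: date
    residual_risks_logged: bool
    sourcing: str                                # "vendor" | "in-house"
    regulated_framework: Optional[str]           # e.g. "banking", "health"


def high_risk_subset(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """The slice the board reviews every quarter."""
    return [r for r in inventory if r.risk_tier == "high"]
```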
Artefact 2 · The conformity dossier per high-risk system
One folder per high-risk system. It contains the risk management plan, the data governance brief, the technical documentation, the human oversight design, the accuracy and cybersecurity test results, and the incident log. This is the artefact a market surveillance authority will request. It must live in one place, be current to within 90 days, and be reproducible without the consultancy that helped you build it.
Artefact 3 · The escalation map
One page. When a serious incident occurs (the Act requires notification to the market surveillance authority within 15 days for serious incidents involving high-risk systems), who calls whom in what order. Authority, vendor, customer, regulator, board. Practised in a tabletop exercise at least once a year. Not invented at 3 a.m.
EU AI Act compliance is not a project with a delivery date. It is an operating capability. The organisations that survive 2026 will be the ones whose AI governance scaled with their AI deployment, not the ones who treated compliance as a sprint.
03 The timeline most boards are getting wrong
The Act applies in phases. Several thresholds have already passed without most organisations noticing.
| Date | What activated |
|---|---|
| 1 Aug 2024 | Act entered into force. The two-year countdown to the high-risk deadline began. |
| 2 Feb 2025 | Prohibited practices banned. Social scoring, certain biometric uses, manipulative AI. Past. |
| 2 Aug 2025 | Rules for general-purpose AI models, governance provisions, and penalties applied. Past. |
| 2 Aug 2026 | High-risk system rules apply. Twelve weeks from now. |
| 2 Aug 2027 | High-risk AI integrated in regulated products fully applicable. |
If your AI system is high-risk and your conformity dossier does not exist on 2 August 2026, you are operating outside the Act. Whether the market surveillance authority of your member state knocks on your door in October 2026 or in March 2027 is a roll of the dice. The exposure is the same.
04 What to do this quarter
If you recognise the gap, here is the protocol that fits the time you have left.
Week 1 to 2. Inventory. Every AI system in production or in late-stage pilot. Tag tier per the Act's Annex III definitions. Be conservative: when in doubt, treat as high-risk and reclassify down later if legal confirms.
Week 3 to 6. Triage. For every high-risk system, assess the gap against the seven requirements in Articles 9 to 15. Output: a one-page gap matrix per system, scored red, amber, or green (a sketch of one possible layout follows the protocol).
Week 7 to 10. Close the reds. The Act does not demand perfection on day one. It demands a documented, defensible, currently operating compliance posture. A high-risk system with a 70-percent dossier and a credible plan to close the rest beats a 100-percent dossier that does not exist yet.
Week 11 to 12. Establish the operating cadence. Quarterly board review of the inventory. Annual review of the dossiers. Incident escalation tabletop exercise on the calendar. Vendor contract renegotiation list locked.
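To make the week 3-to-6 triage concrete, here is a minimal sketch of a gap matrix in code. The requirement labels paraphrase Articles 9 to 15; the system name, scores, and red/amber/green convention are invented for illustration, not prescribed by the Act.

```python
# Illustrative gap matrix for one high-risk system. Requirement labels
# paraphrase Articles 9-15; the system name and scores are invented.
REQUIREMENTS = [
    "risk management system",                # Art. 9
    "data governance",                       # Art. 10
    "technical documentation",               # Art. 11
    "record-keeping",                        # Art. 12
    "transparency information",              # Art. 13
    "human oversight",                       # Art. 14
    "accuracy, robustness, cybersecurity",   # Art. 15
]

# RED = no evidence, AMBER = partial, GREEN = documented and current.
gap_matrix = {
    "credit scoring model": {
        "risk management system": "AMBER",
        "data governance": "GREEN",
        "technical documentation": "RED",
        "record-keeping": "GREEN",
        "transparency information": "AMBER",
        "human oversight": "RED",
        "accuracy, robustness, cybersecurity": "AMBER",
    },
}

# Weeks 7 to 10: close the reds first.
for system, scores in gap_matrix.items():
    reds = [req for req in REQUIREMENTS if scores[req] == "RED"]
    print(f"{system}: close first -> {reds}")
```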
Twelve weeks. Doable. We have run the protocol with three banks and one industrial group in the last six months. The boards who started in February 2026 are clean. The boards who started in May 2026 are working hard but will make the deadline. The boards who start in July 2026 will not.
05 The leadership question
Boards keep telling us the AI Act is "a legal matter". It is not. Legal frames the obligation. The board owns the consequence.
If the market surveillance authority issues a fine of 3 percent of global turnover, it does not land on legal. It lands on the P&L. If the regulator suspends your high-risk system pending compliance, it does not slow down the legal team. It stops the business unit running on it. And if a serious incident reaches the press, the question to the chief executive is not "Did legal advise correctly?" It is "Did you know this system was operating without a conformity assessment?"
Treat AI governance as a legal expense and you get a legal product. Treat it as an operating capability and you get an organisation that can ship AI at scale, defensibly, in Europe, for the next decade.
The choice is on the agenda of your next board meeting. Make it explicitly.