Skygena Signal
Opinion

The EU AI Act enters its second year — what mid-size operators should do this quarter

As the EU AI Act moves from headline to enforcement, the operational implications for mid-size operators are sharper than the law firms suggested. A practical to-do list.

by Skygena Editorial

A year of seminars, white papers and panel sessions has left most mid-size operators with a vague sense that “we should probably do something about the EU AI Act”. Vague is no longer enough. As enforcement moves from theoretical to practical in 2026, the operational implications are sharper than the legal industry has made them sound.

This piece is not legal advice. It is the practical view from a team that has helped half a dozen organisations build their AI operating model in the last twelve months. If you are responsible for AI in a mid-size enterprise, here is what we would do this quarter.

1. Build the inventory you do not have

Almost every client we have walked into in the last six months believed they had three or four AI systems. After two weeks of interviews and inbox archaeology, the real number is usually between fifteen and forty. The “shadow AI” problem is real and growing — every department has subscribed to something, every team has an internal copilot.

The first concrete deliverable for 2026 is a written inventory of every AI system the organisation runs or uses, classified by risk under the AI Act categories. This is not a regulatory exercise. It is an operating one. Without the inventory, you cannot make the next four decisions.
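The inventory works better as structured data than as a slide deck, because structured data is what the controls in section 3 can check against. A minimal sketch in Python — the field names and risk tiers here are our own shorthand, not the Act’s legal definitions, and the example systems are invented:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Shorthand tiers loosely mirroring the AI Act's categories;
    # the legal classification itself is a separate, documented judgement.
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    owner: str            # a named person, not a department
    vendor: str           # "internal" for in-house builds
    purpose: str
    personal_data: bool   # feeds the DPIA decision in section 4
    risk_tier: RiskTier

inventory = [
    AISystem("cv-screening", "j.doe", "internal",
             "ranks inbound applications", True, RiskTier.HIGH),
    AISystem("support-copilot", "a.smith", "VendorX",
             "drafts replies to support tickets", True, RiskTier.LIMITED),
]

# The honest triage from section 2 falls out of the data:
# commit real resources here, lighten the touch elsewhere.
high_risk = [s for s in inventory if s.risk_tier is RiskTier.HIGH]
```

Once the inventory is a file in a repository rather than a spreadsheet in someone’s inbox, keeping it current becomes an engineering task, which is where it belongs.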

2. Distinguish high-risk from the rest — honestly

The Act’s risk categories are not symmetric. Most enterprise AI systems fall into the limited or minimal risk buckets, where the obligations are modest. A small handful — credit scoring, recruitment, anything biometric, anything that materially affects a person’s access to a service — fall into high-risk, where the obligations are substantial.

The mistake we see most often: treating everything as high-risk. The cost of that mistake is process fatigue and a governance function that nobody listens to. Be honest about which systems are actually in scope. Then commit real resources to those systems and lighten the touch on the rest.

3. The “controls everyone hates” pattern

The reason most AI governance frameworks fail is that engineers hate them and stop following them within six weeks. The reason they hate them is usually that the framework was written by people who do not build software.

Pragmatic controls follow three principles:

  • They live where engineers already are. Pull request templates, CI gates, dashboards in the same observability stack as everything else. Not a separate portal nobody opens.
  • They produce evidence automatically. The control is the evidence. No “fill in this spreadsheet at quarter-end”.
  • They are owned by an engineer who has shipped. Not by a policy team that has not.

Build your controls this way and your engineers will tolerate them. Build them any other way and you have a paper exercise.
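The “control is the evidence” idea can be as small as a CI step that fails the build when a deployed system is missing from the inventory, writing a timestamped record an auditor can read as a side effect. A sketch under our own assumptions — the file name, record format, and system names are illustrative, not a standard:

```python
import json
from datetime import datetime, timezone

def inventory_gate(deployed: set[str], inventory: set[str],
                   evidence_path: str = "inventory_gate.json") -> bool:
    """Fail the pipeline if a deployed AI system is not inventoried.

    Writes a timestamped evidence record as a side effect, so the
    control produces its own audit trail -- no quarter-end spreadsheet.
    """
    missing = sorted(deployed - inventory)
    record = {
        "check": "ai-inventory-gate",
        "run_at": datetime.now(timezone.utc).isoformat(),
        "deployed": sorted(deployed),
        "missing_from_inventory": missing,
        "passed": not missing,
    }
    with open(evidence_path, "w") as f:
        json.dump(record, f, indent=2)  # the control is the evidence
    return not missing

# In CI this result would fail the job; here we just inspect it.
ok = inventory_gate({"cv-screening", "fraud-scorer"}, {"cv-screening"})
# ok is False: "fraud-scorer" is deployed but nobody has inventoried it
```

A check like this lives in the pipeline engineers already run, which is the first of the three principles above; the JSON record is the second; and whoever wires it into CI is, almost by definition, the third.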

4. The DPIA is the hinge

Whatever else you do this quarter, write the data protection impact assessment for any system that processes personal data at scale. The DPIA is the hinge between the legal text and the operating reality. It forces you to articulate what data the system sees, why, with what safeguards, and which residual risks you are accepting. Once that is on paper, ninety percent of the governance arguments dissolve.
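Reduced to its skeleton, a DPIA record answers the four questions just named. A sketch of that skeleton as data — the field names are our shorthand, not a legal template, and the entries are invented for illustration:

```python
# The four articulation points of a DPIA, as a record you can
# version-control next to the system it describes.
dpia = {
    "system": "cv-screening",
    "data_seen": ["CVs", "contact details", "interview notes"],
    "why": "rank inbound applications for recruiter review",
    "safeguards": [
        "human review of every rejection",
        "retention capped at six months",
        "no special-category fields used as features",
    ],
    "residual_risks_accepted": [
        "proxy bias via location data in work history",
    ],
}
```

The value is not the format; it is that an owner had to write each field down and someone with authority had to accept the last one.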

5. Pick a board sponsor

The single biggest predictor of whether AI governance becomes real or stays paper is whether someone on the executive committee owns it personally. Not “the CIO is responsible” — that does not move budgets. We mean a named exec, with a recurring line item on the board agenda, and a quarterly metric they have to report.

If you cannot get that, your governance work will be performative.

What we would not do

  • We would not buy an “AI governance platform” yet. The market is overpriced, the products are immature, and what most organisations need is closer to documentation discipline than to a SaaS product.

  • We would not outsource governance to a Big Four firm and call it done. The reports are usually thoughtful, expensive, and unactioned. Build internal capability instead.

  • We would not pretend the AI Act is going to slow down. It is going to accelerate. Operators that get ahead this quarter will spend 2026 shipping. Operators that postpone will spend 2026 catching up.

If we can help you stand up the operating side of any of this — inventory, controls, DPIA, dashboard — write to us at [email protected].

Thinking about AI in your business?

Skygena is a boutique European AI studio engineering autonomous agents and LLM products. If you're wrestling with where to start — or where to stop — we can help.

Book a 30-minute call