1. The gap

On 8 October 2025 the European Commission published the Apply AI Strategy with a “buy European” sectoral pillar. On 9 April 2026 OpenAI paused its UK Stargate site. Two days earlier Mistral, the Commission’s preferred frontier vendor, had raised a fresh round to build datacentres in France and Sweden, while the German enterprise AI champion announced its merger with a Canadian foundation-model company that ships from Toronto.

The European AI policy stack has assumed for half a decade that European industry owns the layer the regulation governs. As of April 2026, it does not.

This piece argues that the European Union is the world’s most rigorous AI regulator and the world’s least credible deployer of regulated AI, and that the gap is not closed by writing more rules. It is closed by underwriting the layer of governance that turns a model into a system. The piece refuses to treat the AI Act as a substitute for that layer. It refuses to treat sovereign-compute funding as a substitute for it either. The continent is being asked to govern an architecture it does not control. The fix is not a longer regulation. It is a working specification for the boundary between a model and the production system it runs inside.

2. The rulebook

The AI Act entered into force on 1 August 2024 and arrives in stages. Article 5 prohibitions and AI literacy obligations have applied since 2 February 2025. General-purpose AI obligations have applied since 2 August 2025. High-risk system rules apply from 2 August 2026. Full applicability is reached on 2 August 2027. Penalties are graduated: Article 5 prohibition breaches reach 35 million euros or 7 percent of global turnover; high-risk breaches reach 15 million or 3 percent; misleading information reaches 7.5 million or 1 percent. General-purpose provider fines under Article 101 reach 15 million or 3 percent, but the Commission's enforcement powers do not start until 2 August 2026.

The companion regulatory text was the AI Liability Directive. The Commission withdrew it on 11 February 2025 from its work programme, citing “no foreseeable agreement” — a quiet acknowledgement that the cross-border tort regime that was supposed to translate AI harms into recoverable damages was politically unreachable. National tort law now sets the floor. That floor differs in 27 directions.

Standardisation work at CEN-CENELEC JTC 21 entered an accelerated procedure in October 2025. The first AI harmonised standard, prEN 18286, covering AI quality management, reached public enquiry on 30 October 2025. Core deliverables are calendared for the fourth quarter of 2026, days before the high-risk system deadline lands. The publication path was compressed to make the dates work.

The rulebook is real. The standards beneath the rulebook are still being drafted. The systems the rulebook governs have not yet been built by anyone domiciled in the Union.

3. The compute layer

EuroHPC has built the visible response. Nineteen AI Factories now operate across thirteen Member States, with thirteen Antennas in seven additional Member States and six partner countries including the United Kingdom. Above the AI Factories sit the AI Gigafactories: five proposed sites at one hundred thousand H100-equivalents each, financed by the 20-billion-euro InvestAI fund covering roughly one third of capex per site.

The numbers are large in European terms and small in American ones. A single hyperscaler quarter in 2026 absorbs more capital than the entire AI Gigafactory programme. Stargate Norway is the largest American-anchored compute build on European soil — Nscale plus Aker plus OpenAI, targeting one hundred thousand Nvidia GPUs by year-end 2026 across 230 to 290 megawatts at Kvandal in Narvik. The Norwegian site is not a European programme. It is an American hyperscaler's capacity, located in Europe to chase clean power.

Atomico and Sifted's State of European Tech 2025 report puts European venture capital at roughly 44 billion dollars for 2025, with AI capturing 31 percent. The cumulative European AI venture stack stands at 14 billion dollars against an American comparator of 146 billion. The ratio is roughly ten to one. The deployment gap is wider than the funding gap, because European enterprise AI procurement has been slower than American.

4. Hollywood, Berlin, the lawsuit machine

The European creative-industry response to AI training data has been more coordinated and more institutional than the American one.

GEMA, the German collecting society, filed against Suno in Munich on 21 January 2025 and against OpenAI in November 2024. The Munich oral hearing took place on 9 March 2026; a ruling is calendared for June. SACEM and CISAC have aligned. A joint Goldmedia study commissioned by GEMA and SACEM puts cumulative German plus French rightsholder losses by 2028 at 2.7 billion euros. The German society of music authors is acting as the test plaintiff for a continent whose copyright regimes do not federate.

The Anthropic settlement in Bartz in September 2025 — 1.5 billion dollars for the pirated training corpus — was paid in California and read in Berlin. European labels and publishers have priced it. The implicit position is that licensing is the corrective regime and that the figure is now negotiable. That position is sound for music. It is less sound when transferred to enterprise AI, which is the unstated transfer the AI Act partly performs by reusing the language of intellectual-property risk to describe the agent runtime.

The same instrument that makes a music-rights settlement enforceable in Berlin does not make a customer-service agent’s tool call enforceable in Frankfurt. The two are different problems. The instrument confuses them.

5. The conflation, European edition

A copyright dispute over training data is a question of upstream consent. An enterprise AI deployment problem is a question of downstream authority. Brussels has built half of one and almost none of the other, then mixed the languages.

The high-risk regime in the AI Act is heavily borrowed from product-safety law. It treats an AI system as a product placed on the market by a provider, with a deployer downstream. The vocabulary works for medical devices and worker-management systems where there is a fixed boundary, a fixed risk class, a CE marking. The vocabulary fails when the system is a fleet of autonomous agents that hold delegated authority over a customer database, an internal calendar, an outbound payment channel. The product-safety frame does not produce the receipt that an auditor needs to reconstruct who authorised what. It produces a conformity declaration.

The conformity declaration is necessary. It is not sufficient. The instrument the conformity declaration depends on, for an autonomous agent, is a runtime that produces signed evidence of every tool call, every refusal, every escalation. That instrument is not a product. It is a substrate. The AI Act does not specify it. CEN-CENELEC JTC 21 will not publish a draft for it before the high-risk deadline. The Member States will fill the gap with national orders that diverge.

When the question is the wrong question, the answer is decoration in twenty-seven languages.

6. Surface fears versus operational reality

The Capgemini Generative AI in Organizations 2025 study reads as the cleanest signal on European enterprise reality. Generative AI adoption in organisations has risen from six percent in 2023 to thirty percent in 2025. Ninety-three percent of surveyed organisations are exploring or enabling. Only two percent have deployed AI agents at scale. Twelve percent have agents at partial scale. Twenty-three percent are running pilots. Sixty-one percent are still exploring. Seventy-one percent of European executives say they cannot fully trust autonomous agents.

The trust gap is not produced by hallucination. It is produced by the absence of the layer that constrains the agent. European compliance officers, especially those exposed to GDPR enforcement and DORA, have been trained to demand evidence that survives outside the vendor. Generic chat copilots and unmonitored tool-use agents do not produce that evidence. The compliance officer is therefore right to refuse the agent. The refusal is rational. The refusal is also expensive — for the bank, the insurer, the public administration that cannot answer a citizen request because the assistant cannot be authorised against an audit trail.

The European agent risk list, when written by people who actually deploy agents, is short. Data exfiltration through an ungated tool call. Prompt injection inside a customer-service ticket that escalates an agent into a privileged action. An agent given access to an Outlook calendar, a CRM, a payments rail, that writes to the wrong row at three in the morning. A developer team shipping an agent with no record of what it actually authorised, no replay path, no signed receipt of the verdict. The corrective list is the same in every language: tool-use permissioning with capability tokens, signed receipts that survive vendor exit, sandboxed compute with no network or filesystem reach, scoped delegation with a chain of custody, fail-closed default at the policy gate. None of these are exotic. None of them are required by name in any provision of the AI Act.
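
The first corrective on that list can be made concrete in a few lines. Every name below is invented for illustration; the point is only the fail-closed shape: a capability token enumerates exactly what is granted, and the gate denies everything else, including tools and actions it has never heard of.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityToken:
    # Hypothetical scoped grant: one principal, one tool, explicit actions.
    principal: str
    tool: str
    actions: frozenset

def gate(token: CapabilityToken, tool: str, action: str) -> bool:
    # Fail closed: anything not explicitly granted is denied.
    if tool != token.tool:
        return False
    return action in token.actions

crm_read = CapabilityToken("agent-7", "crm", frozenset({"read"}))
print(gate(crm_read, "crm", "read"))       # explicit grant passes
print(gate(crm_read, "crm", "delete"))     # ungated write denied
print(gate(crm_read, "payments", "send"))  # unknown tool denied
```

The design choice that matters is the absence of an allow-by-default branch: the only path to True runs through an explicit grant.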

The receipt is the vow.

7. What guarded execution actually is

Guarded execution is the layer between the model, the user, the data, the tools. It evaluates every tool call against a signed policy bundle. It returns a verdict — allow, deny, escalate — before the call leaves the boundary. It writes a receipt that names the caller, the action, the policy, the result. It refuses to fail open when the policy cannot evaluate. It produces an evidence pack that can be replayed offline against the original bundle, byte for byte, by an auditor who does not trust the vendor.
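
The verdict-and-receipt loop just described can be sketched in stdlib Python. The policy shape, rule names, and field layout here are all hypothetical; the sketch only shows the fail-closed evaluation and the four things a receipt must name: the caller, the action, the policy, the result.

```python
import hashlib
import json
from datetime import datetime, timezone

def evaluate(policy: dict, caller: str, action: str) -> str:
    """Return a verdict before the call leaves the boundary."""
    rule = policy.get("rules", {}).get(action)
    if rule is None:
        return "deny"  # fail closed: an unknown action never passes
    if caller not in rule.get("callers", []):
        return "deny"
    return rule.get("verdict", "deny")  # "allow" or "escalate"

def receipt(policy: dict, caller: str, action: str, verdict: str) -> dict:
    """A receipt names the caller, the action, the policy, the result."""
    bundle_hash = hashlib.sha256(
        json.dumps(policy, sort_keys=True).encode()).hexdigest()
    return {"caller": caller, "action": action,
            "policy": bundle_hash, "result": verdict,
            "at": datetime.now(timezone.utc).isoformat()}

policy = {"rules": {
    "crm.read": {"callers": ["agent-7"], "verdict": "allow"},
    "payments.send": {"callers": ["agent-7"], "verdict": "escalate"}}}

assert evaluate(policy, "agent-7", "crm.read") == "allow"
assert evaluate(policy, "agent-7", "payments.send") == "escalate"
assert evaluate(policy, "agent-7", "db.drop") == "deny"  # not in the bundle
```

Deny is the default on every branch; allow and escalate are reachable only through an explicit rule, which is what fail-closed means in practice.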

The category was named publicly when Microsoft released the Agent Governance Toolkit on 2 April 2026. The European response surface has so far been thin: Mistral’s enterprise platform Le Chat Enterprise covers identity and tenant separation but not policy-evaluated tool-call gating. Aleph Alpha’s PhariaAI shipped governance hooks for the German civil service before the merger with Cohere. Black Forest Labs occupies the model layer. None of these is a substrate.

HELM is the working example I know best because the Mindburn Labs team ships it. The benchmark artifact in the open-source repository records sub-millisecond p99 latency on the governed hot path. The pipeline is verified in TLA+. The OWASP Agentic Top 10 coverage is full at ten of ten. Receipts are signed with Ed25519 over a JCS-canonical manifest, offline-verifiable. The kernel and reference packs ship as Apache-2.0 at github.com/mindburn-labs/helm. The point is not that one project owns the category. The point is that the category exists, has multiple credible implementations, is the part of European AI that the regulation governs without naming.
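
The canonicalization step an offline verifier depends on can be approximated with the standard library: for manifests of plain ASCII strings, `json.dumps` with sorted keys and minimal separators matches RFC 8785 (JCS) canonical bytes. The SHA-256 digest below is a stand-in for the Ed25519 signature, which stdlib Python does not provide; the property being shown is that key order stops mattering once the bytes are canonical, so two independently serialized copies of the same receipt verify identically.

```python
import hashlib
import json

def canonicalize(manifest: dict) -> bytes:
    # JCS-style canonical JSON (RFC 8785) for simple ASCII manifests:
    # sorted keys, minimal separators, no insignificant whitespace.
    return json.dumps(manifest, sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False).encode()

def digest(manifest: dict) -> str:
    # Stand-in for the Ed25519 signature: a real receipt signs the
    # canonical bytes with a private key rather than merely hashing them.
    return hashlib.sha256(canonicalize(manifest)).hexdigest()

a = {"caller": "agent-7", "action": "crm.read", "result": "allow"}
b = {"result": "allow", "action": "crm.read", "caller": "agent-7"}

# Same manifest, different key order: same canonical bytes, same digest.
assert canonicalize(a) == canonicalize(b)
assert digest(a) == digest(b)
```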

A continent that regulates the model and not the boundary has not described the system.

8. Economic upside

The economic case for European guarded execution is the case for unblocking the procurement officers who currently say no.

Capgemini’s data implies that the gap between the thirty percent that have adopted generative AI and the two percent that have deployed agents at scale is the AI value gap. Closing even part of that gap — moving twenty percent of European organisations from pilot to scale — would book hundreds of billions of euros of cumulative output across a decade. The McKinsey range of seven percent GDP uplift assumes American-style deployment maturity. European maturity is currently lower, which makes the uplift slower. It also makes the uplift larger when it lands.

The sectors that move first will be financial services, where compliance officers can read a TLA+ proof if it is presented to them; pharmaceuticals, where the FDA-equivalent EMA already inspects training-data provenance; public administration, where the citizen-facing assistant requires a paper trail by default. The sector that will move last is the smaller manufacturing tier of the German Mittelstand, where the Bitkom 2025 data shows fifty-three percent of small to medium firms cite legal uncertainty as a top barrier and fifty-three percent cite a technical know-how gap. The Mittelstand will not adopt agents on commercial-cloud chat platforms. It will adopt agents inside a substrate that issues receipts.

9. Where capital is flowing — and where it isn’t

Mistral raised 1.7 billion euros at 11.7 billion euros post-money in September 2025, with a follow-on round in March 2026. Black Forest Labs raised 300 million dollars at 3.25 billion in December 2025. Cohere announced its acquisition of Aleph Alpha in April 2026 at roughly 20 billion dollars enterprise value, with a 600-million-euro Schwarz Group commitment for sovereign deployment. Helsing raised 600 million euros at 12 billion euros in June 2025 — the largest European defence-tech round.

Almost all of this is the model layer or the application layer above it. The boundary layer — the substrate that produces the audit trail — has attracted negligible European venture capital at scale. Lakera operates in Switzerland with a perimeter posture. Sintra plus a small number of governance-feature startups are building application-layer orchestration. None of them are publicly capitalised at the level the EU AI Act high-risk regime will require by August 2026.

The capital flowing to European AI is buying two things: model intelligence and serving capacity. The thing it is not buying is the place an autonomous agent is actually held to account. The InvestAI fund, structured at sovereign scale, would be the right instrument to underwrite the substrate. So far it has been structured to fund compute.

The receipt names the layer that pays the bill.

10. What this costs founders, developers, enterprises

European founders building AI products in regulated sectors spend a quarter of their early engineering budget rebuilding a homegrown policy and audit layer that an open-source substrate would solve. They do this twenty-seven times — once for each Member State’s reading of GDPR Article 22, the Digital Services Act, the AI Act, sectoral law, the national tort baseline that filled the gap left by the withdrawn Liability Directive. The same rebuild is paid for by the next enterprise customer, then the third, then the fourth. By the time a European AI startup has crossed five enterprise customers, the founders have written more legal-engineering glue than product. Many do not survive the round-trip and sell to a US acquirer who already paid for the equivalent layer once.

European developers paid the cost when the Capgemini data showed seventy-one percent executive distrust of agents. Distrust is not laziness. It is the rational response of a procurement officer who cannot reconstruct what the assistant authorised. The fix is the substrate. The response so far has been to slow procurement.

European enterprises paid the cost when the AI Act’s high-risk timeline collided with the absence of a published harmonised standard for agent runtime governance. Compliance teams have written internal SOPs that mostly forbid the deployment of agents in high-risk contexts until a vendor-attestation regime exists that they can read. The vendor-attestation regime depends on a specification that does not yet exist.

The bill is the unwritten one.

11. What the Union should do instead

A working European policy is not a longer one. It is a more accurate one.

First, the European Commission should commission CEN-CENELEC JTC 21 to draft a harmonised standard for an agent execution boundary specification — tool-use permissioning, signed receipts, offline-replayable evidence packs, fail-closed default semantics, human-approval escalation — calendared to publish before the August 2026 high-risk deadline rather than after it. The OWASP Agentic Top 10 is the threat list. JTC 21’s job is the corresponding control list, in a form that translates into a CE mark.

Second, the European AI Office should require the boundary specification in any sectoral flagship under the Apply AI Strategy. The strategy’s three pillars depend on a deployment surface that produces receipts. Without the surface, the flagships will be pilots that do not survive procurement.

Third, the InvestAI fund should be partially redirected from compute to substrate. A fraction of one Gigafactory’s capex underwrites a European open-source guarded-execution stack at scale. The compute will be built whether the fund pays for it or not. The substrate will not.

Fourth, the Union should pass a federal-style safe harbour through ordinary legislative procedure for organisations that deploy AI through a qualifying guarded execution layer. The harbour reduces tort exposure for compliant deployments and creates the missing economic incentive that the withdrawn Liability Directive was supposed to provide. It is the regulatory move with the highest expected value per article of legislation.

Fifth, Member States should agree a model statute for agent runtime evidence retention, drafted at the European AI Office and adopted as a directive rather than a regulation, to allow Member State implementation without re-litigating the substantive baseline. The retention obligation is operationally light. It is regulatorily heavy.

Sixth, sovereign and Member-State capital should fund European open-source HELM-class platforms the way the EU funded the early Galileo programme. Most of European AI’s deployment cost will be paid out of operating budgets that have no slack for governance research. Public capital is the right capital to underwrite the substrate. The British plus the Danes have already partly committed. The French plus the Germans should not need to be persuaded.

Seventh, Brussels should formally separate creative-industry IP regimes from enterprise AI runtime governance in the next legislative cycle. The collecting societies have grievances and remedies. The Mittelstand needs tooling and has almost none. The two cohorts are being regulated as if they were one.

Eighth, the European AI Office should run regulator-training programmes on actual system architecture before officials are asked to draft a delegated act. The position that training-data legality and model openness are the bottleneck has been foreclosed by every system that shipped capability without a record.

The list is not a wish. It is the minimum the Union owes its own enterprises.

12. Verdict

Europe wrote the rulebook. Europe did not build the stack. The Member States and the Commission will spend the next two years discovering that the rulebook is unenforceable against a layer that does not exist. The continent that figures out the substrate first will own the next decade of European AI. So far that continent looks like the United States.

Most of European industry is governable.

Most of the boundary is not yet built.

Build the boundary.

References

  1. EU AI Act timeline and applicability — European Commission.
  2. AI Act Article 99 penalties — artificialintelligenceact.eu.
  3. AI Liability Directive withdrawn — IAPP, February 2025.
  4. Apply AI Strategy — European Commission, 8 October 2025.
  5. EuroHPC AI Factories network.
  6. EU AI Gigafactories analysis — Interface EU.
  7. CEN-CENELEC JTC 21 standards update, October 2025.
  8. Stargate Norway — OpenAI announcement.
  9. Mistral AI Series C — CNBC, September 2025.
  10. Black Forest Labs Series B — TechCrunch, December 2025.
  11. Cohere acquires Aleph Alpha — TechCrunch, April 2026.
  12. Capgemini Generative AI in Organizations 2025.
  13. State of European Tech 2025 — Atomico and Sifted.
  14. GEMA v Suno landmark hearing, March 2026 — Music Business Worldwide.
  15. Bartz v Anthropic settlement — Copyright Alliance, September 2025.
  16. HELM open-source repository, Mindburn Labs, Apache-2.0.
  17. Microsoft Agent Governance Toolkit (AGT) — Microsoft Open Source Blog, April 2026.