1. The summit and the silence

On 10 February 2025, at the Grand Palais, President Macron announced 109 billion euros of private and foreign AI investment in France over the coming years. The headline figure bundled a UAE-anchored datacentre fund of 30 to 50 billion, Brookfield at 20 billion, Amazon at 6 billion by 2031, Digital Realty at up to 6 billion, Equinix at 630 million, and Fluidstack at 10 billion for a French national supercomputer. The summit produced more capital commitment per square metre of palace than any AI event in European history.

The next day, 11 February, the European Commission quietly withdrew the AI Liability Directive. Eight months after that, on 8 October 2025, Brussels published the Apply AI Strategy with a “buy European” pillar and a sectoral flagship plan that depended, operationally, on a deployment substrate the Commission did not specify. France hosted the summit. France did not specify the substrate either.

This piece argues that France is the European country with the most credible sovereign-AI capital strategy, the most aggressive frontier-model bet, the most culturally serious copyright posture, and the smallest published specification for what enterprise AI deployment governance actually requires inside French firms. The capital is real. The frontier model is real. The deployment layer that turns the model into a governed system inside a French bank, hospital, or public administration is not.

The piece refuses to treat sovereign compute as a substitute for that layer. It refuses to treat copyright defence of cultural industries as a proxy for enterprise AI risk. The substrate is what France can build with what it already has.

2. The capital map

Mistral closed a 1.7 billion euro Series C at 11.7 billion euros post-money in September 2025, with ASML as lead and a roster including DST, Andreessen Horowitz, Bpifrance, General Catalyst, Index, Lightspeed, Nvidia. A follow-on raise of roughly 830 million dollars in March 2026 funded datacentres near Paris and in Sweden. The combined firepower puts Mistral closer to credible-frontier than any other European foundation-model company.

H Company, founded in 2024, closed a 220 million dollar seed in May 2024 — at the time the largest European AI seed — backed by Schmidt, Amazon, Accel, Bpifrance, UiPath, Eurazeo, Niel, Milner, Arnault, Samsung. Three of the five co-founders departed in August 2024. The firm acquired Mithril Security in May 2025 and expanded its agentic offering in June. Hugging Face is headquartered in Paris and remains the centre of the European model-weights distribution layer. FlexAI, Poolside, LightOn populate the second wave.

The French Tech vendor base raised 8.2 billion euros across 686 rounds in 2025 according to French Tech Journal. AI took 62.5 percent of that, 5.18 billion euros, with Mistral as the dominant single line item. Paris took 47 percent of rounds and 64 percent of capital. The geography is centralised. The vendor stack is concentrated.

The number that does not appear in this map is a deployment-substrate raise. There is no French company at scale building the boundary that turns Mistral’s model into a governed enterprise system. Le Chat Enterprise covers identity and tenant separation. It does not cover policy-evaluated tool-call gating with signed receipts. Mistral could ship the substrate. The market has not asked for it explicitly enough to make it the priority.

3. The Action Summit and the strategy

The AI Action Summit at Paris in February 2025 produced four operational deliverables: the Statement on Inclusive and Sustainable AI signed by sixty-one countries; the Coalition for Sustainable AI; a 400-million-euro public-AI partnership; the 2.1-billion-euro France 2030 AI tranche that anchored the public side of the 109-billion announcement. The Stratégie Nationale pour l’Intelligence Artificielle, originally adopted in 2018, was refreshed by reference. The standalone Strategic Committee for AI report, the IA Commission’s March 2024 work, framed the move toward applied deployment without specifying the runtime governance specification underneath.

The summit’s framing positioned France as Europe’s reply to American Stargate. The reply is genuine on capital and partial on architecture. Fluidstack’s 10-billion-euro French supercomputer deal is the largest single national-compute commitment in the package. The supercomputer is being built. The control-plane that makes it usable by regulated enterprises is not part of the public commitment.

CNIL completed its AI-and-GDPR guidance set on 22 July 2025: thirteen sheets covering annotation, security, and the question of when an AI model contains personal data, plus the February 2025 additions on subject information and rights facilitation. The guidance is among the most operationally specific in Europe. It is upstream of the substrate. CNIL guidance describes what a deployer should do. It does not specify the runtime that produces the evidence the deployer needs to prove they did it.

4. Hollywood, Paris, the lawsuit machine

The French creative-industry response has been institutional, coordinated, louder than the British. SACEM joined GEMA in coordinating European-level enforcement against Suno and Udio. The Goldmedia study commissioned by GEMA and SACEM puts cumulative German plus French rightsholder losses by 2028 at 2.7 billion euros. The CSPLA, the Conseil supérieur de la propriété littéraire et artistique, has produced a series of reports framing France’s negotiating posture against the AI Act’s text-and-data-mining exception. The cultural-industries lobby in France is unusually well-organised; the political weight of the Centre national du cinéma extends into AI policy.

The Anthropic settlement in Bartz in September 2025 — 1.5 billion dollars for the pirated training corpus — was paid in California and read at SACEM as the new floor for licensing negotiations. The implicit French position is that the upstream training step will be repaired by a combination of court rulings, the Suno/Udio litigation, direct corpora deals like the OpenAI–Le Monde model. That position is sound for cultural industries. It is the wrong frame for a French bank deploying an autonomous customer-service agent.

The lawsuit machine is real. It does not describe the agent runtime risk inside Crédit Agricole or BNP Paribas. It describes the upstream training step.

5. The conflation, French edition

A copyright dispute over training data is a question of upstream consent. An enterprise AI deployment problem is a question of downstream authority. Paris has produced an excellent framework for the first question and almost none for the second.

The French regulatory instinct, formed by GDPR enforcement and the Digital Republic Act, treats AI as a privacy and IP problem first and a system-architecture problem second. The instinct is well-trained on inputs. It is under-trained on outputs. When a French bank deploys an agent across a CRM, a calendar, an outbound payment channel, the disputed event is not whether a copyrighted work entered the model. The disputed event is who authorised the agent to act on behalf of which client, on what evidence, with what audit trail. The cultural-industries IP frame does not answer that question. CNIL guidance points at it. The AI Act vocabulary describes it through product-safety analogues that fit imperfectly.

The collapse: French AI policy has the same instrument for a Suno lawsuit and for a BNP Paribas deployment. The instrument was designed for upstream consent. The deployment requires downstream authority. The two regimes need different remedies.

When the question is the wrong question, the answer is sectoral.

6. Surface fears versus operational reality

The Capgemini Generative AI in Organizations 2025 European data carries over to France: thirty percent adoption, two percent at-scale agent deployment, twelve percent partial scale, twenty-three percent in pilot, sixty-one percent exploring, seventy-one percent executive distrust of agents.

The French operational risk list, written by the security teams at Société Générale and Crédit Agricole, the CISOs at AP-HP and the public-data agencies, is the same list the British and Germans wrote.

It is data exfiltration through an ungated tool call inside a customer-relationship assistant. It is prompt injection inside a customer-service ticket that escalates an agent into a privileged action across an Outlook tenant. It is an agent given access to a CRM, a calendar, an outbound email, a payments rail, that writes to the wrong row at three in the morning. It is a development team that ships a chatbot with no record of what it actually authorised, no replay path, no signed receipt. It is a workforce that brings its own AI to work — globally seventy-eight percent — and a security organisation that has no telemetry on what the personal model saw. It is a CNIL-aware compliance officer who refuses the deployment because the rights-of-data-subjects readout cannot be produced from the agent’s existing logs.

The fix is not better prompts. The fix is a default-deny on attention with explicit allowlists for what enters the queue, a record of every refusal, a receipt for every grant. The corrective list maps cleanly onto the OWASP Agentic Top 10. It is not exotic. It is not in France 2030.
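The default-deny corrective can be sketched in a few lines. This is an illustrative sketch only; the names (`ALLOWED_SOURCES`, `Gatekeeper`, `admit`) are hypothetical and not drawn from any shipping product:

```python
from dataclasses import dataclass, field

# Explicit allowlist: only these sources may enter the agent's input queue.
ALLOWED_SOURCES = {"crm.read", "calendar.read"}

@dataclass
class Gatekeeper:
    refusals: list = field(default_factory=list)  # a record of every refusal
    grants: list = field(default_factory=list)    # a receipt for every grant

    def admit(self, source: str, payload: str) -> bool:
        """Default-deny: anything not explicitly allowlisted is refused."""
        if source in ALLOWED_SOURCES:
            self.grants.append({"source": source})
            return True
        self.refusals.append({"source": source})
        return False
```

The shape matters more than the code: the deny branch is the default path, and both branches leave a record behind.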

The receipt is the vow.

7. What guarded execution actually is

Guarded execution is the layer between the model, the user, the data, the tools. It evaluates every tool call against a signed policy bundle. It returns a verdict — allow, deny, escalate — before the call leaves the boundary. It writes a receipt that names the caller, the action, the policy, the result. It refuses to fail open when the policy cannot evaluate. It produces an evidence pack that can be replayed offline against the original bundle, byte for byte, by an auditor who does not trust the vendor.
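The verdict step described above can be sketched as follows, assuming a simple action-to-rule policy map. The function names and policy shape are illustrative, not HELM's or any vendor's actual API; the property being demonstrated is that any evaluation failure produces a deny, never a silent allow:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"

def evaluate(call: dict, policy: dict) -> tuple[Verdict, dict]:
    """Evaluate a tool call before it leaves the boundary.

    Returns a verdict plus a receipt naming the caller, the action,
    the policy, and the result."""
    try:
        # Unknown actions fall through to "deny" (default-deny).
        rule = policy["rules"].get(call["action"], "deny")
        verdict = Verdict(rule)
    except Exception:
        # Fail closed: if the policy cannot evaluate, the answer is no.
        verdict = Verdict.DENY
    receipt = {
        "caller": call.get("caller"),
        "action": call.get("action"),
        "policy_id": policy.get("id"),
        "result": verdict.value,
    }
    return verdict, receipt
```

A malformed policy bundle, a missing rules table, or an unrecognised rule string all land in the same place: a deny with a receipt.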

The category was named publicly when Microsoft released the Agent Governance Toolkit on 2 April 2026. The French response so far is thin. Mistral’s Le Chat Enterprise plus a small set of enterprise-orchestration startups occupy the space above the model. None of them ships a substrate at the level CNIL, the ACPR, or the EU AI Act high-risk regime will need by August 2026.

HELM is the working example I know best because the Mindburn Labs team ships it. The benchmark artifact in the open-source repository records sub-millisecond p99 latency on the governed hot path. The pipeline is verified in TLA+. Coverage of the OWASP Agentic Top 10 is complete at ten of ten. Receipts are signed with Ed25519 over a JCS-canonical manifest and are offline-verifiable. The kernel and reference packs ship as Apache-2.0 at github.com/mindburn-labs/helm. The point is not that one project owns the category. The point is that the category exists, has multiple credible implementations, and is the part of French AI that France 2030 does not name.
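The receipt-signing idea reduces to deterministic serialization plus a signature over the resulting bytes, so an offline verifier can reproduce exactly what was signed. HELM is described above as using Ed25519 over RFC 8785 (JCS) canonical JSON; the stdlib-only sketch below substitutes HMAC-SHA256 for the signature and uses `json.dumps` with sorted keys as a rough JCS approximation for simple ASCII payloads:

```python
import hashlib
import hmac
import json

def canonical(receipt: dict) -> bytes:
    # Sorted keys, no insignificant whitespace: the same dict always
    # serializes to the same bytes, regardless of insertion order.
    # (A rough stand-in for RFC 8785 JCS; enough for this sketch.)
    return json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()

def sign(receipt: dict, key: bytes) -> str:
    # HMAC-SHA256 here stands in for Ed25519 to keep the sketch stdlib-only.
    return hmac.new(key, canonical(receipt), hashlib.sha256).hexdigest()

def verify(receipt: dict, signature: str, key: bytes) -> bool:
    # Offline verification: recompute the canonical bytes, re-derive the
    # signature, and compare in constant time.
    return hmac.compare_digest(sign(receipt, key), signature)
```

Canonicalization is the load-bearing step: without it, two semantically identical receipts can serialize to different bytes and a valid signature fails to verify.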

A country that funded the model and not the boundary has built half a stack.

8. Economic upside

The French upside is concentrated in the regulated incumbents that already buy enterprise software at scale: BNP Paribas, Société Générale, Crédit Agricole on the bank side; Sanofi, Servier on pharmaceuticals; Total, Engie on energy; Orange, Bouygues on telecom; Carrefour on retail; AP-HP on hospital systems; the Direction interministérielle du numérique on the public-service-delivery layer.

BNP Paribas reports an internal LLM-as-a-Service platform across business lines, an ESG Assessment generative AI assistant used by roughly six hundred relationship managers across approximately 1,600 assessments, a group target of 500 million euros AI-attributable value by 2025 under the Growth, Technology, Sustainability 2025 plan. Crédit Agricole has reported AI-driven credit-risk reductions in segment-specific marketing. The point is not that French banks are behind their British counterparts on adoption volume. The point is that they are at parity in pilot work and behind on substrate procurement, which means their ability to scale agentic deployments is gated by the same missing layer.

Closing the gap between the thirty percent adoption rate and the two percent at-scale rate on a French enterprise base produces a low-tens-of-billions cumulative output increment over a decade. The arithmetic is similar to the British and German cases. The export angle is different: France can sell the cultural-industries IP frame to other Latin and francophone jurisdictions; it can also sell the substrate, if it builds it, to the Commission’s preferred-vendor flagships.

9. Where capital is flowing — and where it isn’t

The 109 billion euro headline at the AI Action Summit is largely directed at compute and model layers. Brookfield, the UAE-anchored fund, Amazon, Digital Realty, Equinix, Fluidstack are buying buildings, GPUs, megawatts. Mistral is buying frontier inference. Hugging Face is buying weights distribution. H Company is buying agentic application layer. The numbers compete with American hyperscaler quarters, which is a meaningful new fact about Europe.

The number that does not appear in the package is a substrate raise. There is no line item in the 109 billion that funds an open-source guarded execution layer at French scale. Bpifrance has the institutional capacity to underwrite one. France 2030 has the political mandate. CNIL has the technical authority to publish the specification. None of these instruments has yet been pointed at the substrate. The InvestAI fund at Brussels could backstop one. So far it has been structured to fund compute.

The capital flowing to French AI is buying two things: model intelligence and serving capacity. The thing it is not buying is the place an autonomous agent is held to account on French regulatory terms.

The receipt names the layer that pays the bill.

10. What this costs founders, developers, enterprises

French founders building AI products in regulated sectors spend a quarter of their early engineering budget rebuilding a homegrown policy and audit layer that an open-source substrate would solve. They do this against CNIL guidance, against Banque de France supervisory expectations for ACPR-regulated firms, against Article 22 of GDPR as interpreted by French courts. The rebuild is paid for again with each enterprise customer that asks the same question through a slightly different procurement reading.

French developers paid the cost when the Le Chat Enterprise rollout reached scale and the security teams at the early adopters discovered that the platform did not produce the readouts CNIL would require under a strict reading of the rights-facilitation guidance. The fix is not a Le Chat ban. It is a permissioned proxy with signed receipts. The response so far has been sectoral SOPs that mostly forbid customer-facing deployment.

French enterprises paid the cost when seventy-eight percent of their AI users brought a personal model to work and the chief information security officer at one of the listed banks learned about it from a Microsoft research report rather than from internal logs. The remediation budget has been authorised. The remediation architecture has not.

The bill is the unwritten one.

11. What the country should do instead

A working French policy is not a longer policy. It is a more precise one.

First, France 2030 should announce a substrate line — an explicit allocation, on the order of one to two percent of the 2.1-billion-euro AI tranche — earmarked for an open-source guarded execution layer domiciled in France, available to French enterprises under European software licences. The scale required is small. The return is asymmetric.

Second, CNIL should publish a position paper specifying the technical artefacts a controller deploying autonomous agents must retain to produce a defensible Article 22 readout. The artefact list maps directly onto the boundary specification: signed receipts, replayable evidence packs, fail-closed defaults, scoped delegation. CNIL is the European data-protection authority best positioned to write this position.

Third, the Direction interministérielle du numérique should require the boundary specification in every public-sector AI deployment funded under France 2030 and the Service Public Plus programme. Public procurement is the largest single lever for adoption pace. Whitehall, the BNetzA, the Commission have not yet written this requirement; France could.

Fourth, the Conseil d’État, paired with the Cour de cassation, should issue an early note clarifying that deployment of autonomous agents through a qualifying guarded execution layer creates a presumption of compliance with applicable downstream supervisory expectations. The note is the closest French-law analogue to a statutory safe harbour. It is doctrinally available without legislation.

Fifth, the ACPR should publish a supervisory expectation requiring authorised firms running customer-facing autonomous agents to retain receipts of every authorised tool call for a minimum period, mirroring the FCA position the British have not yet taken. The expectation is operationally light. France would be the first regulator in the European Union to specify it.

Sixth, the CSPLA should formally separate the cultural-industries IP regime from the enterprise AI runtime governance regime in its next set of recommendations. The collecting societies have grievances and remedies. The Mittelstand-equivalent French ETI cohort needs tooling and has almost none. The two cohorts are being regulated as if they were one.

Seventh, Bpifrance should fund open-source HELM-class platforms domiciled in France on a non-dilutive basis, in coordination with the InvestAI fund at Brussels. Most of French AI’s deployment cost will be paid out of operating budgets that have no slack for substrate research. Public capital is the right capital. The European-level coordination removes single-country risk.

Eighth, the École Polytechnique, ENS, INRIA, Mines should run regulator-training programmes on actual system architecture before officials are asked to draft sectoral law. The position that frontier-model spend alone is the bottleneck has been foreclosed by the gap between Capgemini’s thirty-percent adoption number and its two-percent at-scale number.

The list is not a wish. It is the minimum that turns the AI Action Summit into a deployment-grade reality.

12. Verdict

France hosted the summit. France raised the capital. France named the cultural-industries posture. France did not specify the substrate. The country that funded the frontier-model layer first will need to fund the boundary layer second, on smaller numbers, with more discipline, faster. The substrate is the missing line item.

Most of French AI is governable.

Most of the boundary is not yet built.

Build the boundary.

References

  1. AI Action Summit, Paris, February 2025 — Élysée.
  2. France’s 109 billion euro AI package — Bloomberg, February 2025.
  3. Mistral AI Series C announcement — Mistral.
  4. Mistral $830M follow-on raise, March 2026 — TechCrunch.
  5. H Company $220M seed round — TechCrunch.
  6. French Tech 2025 funding report — French Tech Journal.
  7. CNIL AI guidance, July 2025 — CNIL.
  8. BNP Paribas LLM-as-a-Service platform — BNP Paribas.
  9. GEMA–SACEM Goldmedia study — CISAC summary.
  10. AI Liability Directive withdrawal — IAPP.
  11. EU AI Act implementation timeline — European Commission.
  12. Capgemini, Generative AI in Organizations 2025.
  13. Bartz v. Anthropic settlement — Copyright Alliance.
  14. Microsoft Agent Governance Toolkit.
  15. HELM open-source repository — github.com/mindburn-labs/helm.