1. The gap
Anthropic agreed to pay 1.5 billion dollars in September 2025 for the books it had downloaded from a pirate library to train its model. The same week, a junior associate at a Manhattan law firm was editing a brief inside an incognito window because her firm had banned the model that wrote it.
Both stories are about American AI in 2026. Only one of them is what most regulators talk about.
The country that sells the world its frontier models has a legal apparatus processing copyright claims at industrial scale and a deployment apparatus that mostly does not exist. The first apparatus is loud, well-funded, and dominated by entertainment-industry plaintiffs whose grievances are real but bounded. The second apparatus is quiet, badly capitalised, and matters more. This piece argues that the United States is spending most of its policy oxygen on the wrong problem, and that the country it could become, the country that exports guarded execution at the layer beneath the model, is the country it is currently underfunding.
The piece refuses three things. It will not treat creative-industry copyright disputes as a proxy for enterprise AI risk. It will not pretend that frontier-model investment is the same as enterprise deployment infrastructure. It will not propose a regulatory frame written by people who do not know how a tool call is brokered.
2. The American AI build-out
OpenAI closed a primary round at an 852-billion-dollar valuation on 31 March 2026, raising 122 billion in a single transaction. Anthropic completed a 30-billion-dollar Series G at 380 billion post-money in February. xAI raised 20 billion at 230 billion in January and then merged with SpaceX inside a combined entity reported at 1.25 trillion. Google’s Gemini 3 family shipped in November 2025. Meta’s Llama 4 launched in April 2025 and underperformed on independent benchmarks; the company then paid 14.3 billion for a 49 percent stake in Scale AI, largely to acquire the talent.
The four largest hyperscalers will spend between 650 and 700 billion dollars on AI-related capital expenditure in 2026, an increase of roughly 60 percent year on year. Amazon is reported at 200 billion. Alphabet at 175 to 185. Meta at 115 to 135. Microsoft above 120, with an 80-billion Azure backlog constrained by available power. The Stargate project, announced 21 January 2025 by OpenAI alongside Oracle, SoftBank, and the UAE’s MGX, committed 500 billion over four years; the first 1.2-gigawatt site in Abilene, Texas, came online in September 2025; the UK build was paused on 9 April 2026 over energy costs.
The federal posture matched the capital. President Trump revoked Executive Order 14110 on 20 January 2025, signed Executive Order 14179 three days later, released the AI Action Plan on 23 July 2025, and issued OMB memoranda M-25-21 and M-25-22 on 3 April 2025 to push procurement and acceleration. The NIST AI Safety Institute was renamed the Center for AI Standards and Innovation in June 2025, with a mandate that tilted from safety language toward national-security and competitiveness language. The Bureau of Industry and Security rescinded the AI Diffusion Framework in May 2025 and shifted the latest export-control rule from “presumption of denial” to “case-by-case” review for the H200-class accelerator and its peers in January 2026.
This is a country that builds AI. It is not yet the country that deploys it.
3. Enterprise AI adoption — wide and shallow
McKinsey’s 2025 State of AI survey reports that 88 percent of organisations now use AI in at least one function and 72 percent use generative AI. The share reporting measurable EBIT impact from generative AI is 17 percent. The share attributing more than five percent of EBIT to generative AI is roughly six percent of the surveyed population. BCG’s 2025 study reports that five percent of large enterprises are “future-built” and that 60 percent are stuck in invest-without-returns territory; leaders deliver twice the revenue growth of laggards by concentrating on three or four use cases instead of spreading across six.
The deployment that has happened is concentrated in software, finance, healthcare, back-office operations. JPMorgan’s LLM Suite reaches more than 200,000 employees and rotates models on an eight-week refresh cycle. Walmart announced an AI rollout to 1.5 million frontline associates in June 2025. Epic, the dominant electronic-health-record vendor, reports that 85 percent of its customer base is live with at least one generative AI feature; the in-product “Insights” tool is exercised more than 16 million times per month. The FDA’s running list of AI-and-machine-learning-enabled medical devices crossed 1,451 cumulative authorisations at year end 2025, with 295 cleared in the calendar year alone. GitHub reported 4.7 million paid Copilot subscribers as of January 2026, a 75-percent year-over-year jump; 80 percent of developers who join GitHub now use Copilot in their first week. Harvey, the legal AI workspace, raised 200 million dollars at an 11-billion valuation in March 2026 on roughly 190 million in annual recurring revenue, with paid deployments inside 50 of the AmLaw 100.
The pattern is broad and shallow. Most companies have authorised use; few have measurable economic effect. The gap is in the layer between the model and the workflow: the layer that decides what an AI is allowed to do, the layer that records what it actually did, the layer that escalates when something is wrong. That is the layer this piece keeps returning to.
4. Hollywood, music, the lawsuit machine
The list is now long enough to read as evidence in itself.
- December 2023: New York Times v. Microsoft and OpenAI lands in the Southern District of New York; in March 2025 the motion to dismiss is largely denied, and the direct-infringement, contributory-infringement, and DMCA section 1202 claims survive.
- June 2025: Judge Alsup rules in Bartz v. Anthropic that training on lawfully purchased books is “quintessentially transformative” and therefore fair use, but holds that Anthropic’s central library of pirated LibGen and Pirate Library Mirror titles is infringement bound for trial.
- September 2025: Anthropic settles for a minimum of 1.5 billion dollars, the largest copyright settlement on the public record, paying roughly 3,000 dollars for each of approximately 500,000 covered works, with preliminary approval on 25 September.
- June 2025, the same month as Bartz: Judge Chhabria grants Meta summary judgment in Kadrey against its 13 named authors, on the narrow ground that they failed to develop the record, and uses the order to hand the next plaintiffs a roadmap built on a market-dilution theory the court itself invited.
- October 2023: Concord Music Group sues Anthropic in Tennessee over Claude’s reproduction of song lyrics; in January 2026 the publishers expand the action to more than 20,000 works and seek over 3 billion in damages.
- June 2024: Universal Music, alongside Sony and Warner, sues Suno in Massachusetts and Udio in the Southern District of New York; the next dispositive ruling is calendared for 12 June 2026.
The system the list belongs to is American copyright as an industrial revenue mechanism. The cost is paid in two currencies: a transfer payment from frontier labs to rightsholders that will be folded into model prices, and a reservoir of legal anxiety that has trained executives to treat every AI question as a copyright question.
The reservoir is the part regulators read.
5. The conflation
A copyright dispute over training data is a question of upstream consent. An enterprise AI deployment problem is a question of downstream authority. They are different problems with different failure modes, and the second is being answered with the lessons of the first, which do not transfer.
When the New York Times sues OpenAI, the disputed event is “did the model ingest a copyrighted work without a license?” The remedy is licensing, settlement, or injunction at the training step. When a Fortune 500 deploys an AI agent that drains a customer database to a stranger, the disputed event is “why did the agent have permission to do that, and where is the receipt showing someone authorised it?” The remedy is policy enforcement at the tool-call boundary, an audit trail that survives outside the vendor, a refusal default when the policy cannot evaluate.
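What enforcement at that boundary means is concrete enough to sketch. A minimal, illustrative version in Python, assuming an allowlist-shaped policy; the names are mine, not any vendor’s API:

```python
# Minimal sketch of a fail-closed gate at the tool-call boundary.
# The policy shape and all names are illustrative, not a vendor API.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ToolCall:
    caller: str   # authenticated identity of the agent making the call
    tool: str     # e.g. "crm.update_row"
    args: dict = field(default_factory=dict)

# Allowlist policy: caller -> tools it may invoke. Anything absent is denied.
POLICY = {
    "support-agent": {"tickets.read", "tickets.reply"},
    "sales-agent": {"crm.read"},
}

def gate(call: ToolCall) -> str:
    """Return a verdict before the call leaves the boundary."""
    allowed = POLICY.get(call.caller)
    if allowed is None:
        return "deny"   # unknown caller: fail closed
    if call.tool not in allowed:
        return "deny"   # tool not on the allowlist: fail closed
    return "allow"
```

The two deny branches are the point: an unknown caller and an unlisted tool both fail closed, which is exactly the property no training-data remedy can express.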
The collapse is this: the legal apparatus built to handle training-data infringement is being asked to handle agent runtime authority as well, and it cannot, because the training claim asks who supplied the input while the runtime claim asks who is responsible for the output. Most of the policy infrastructure being built in 2026 is built for the first question and is being asked to answer the second. It will fail.
The Bipartisan House AI Task Force Report of December 2024 was, on this point, careful: 89 findings and 66 recommendations advanced a sectoral approach and refused omnibus legislation. The work since then has not held that line. The active state bills — Colorado SB 24-205, now delayed to 30 June 2026; Texas TRAIGA, effective 1 January 2026; the patchwork in California, Utah, Illinois — are largely written in the vocabulary of disclosure, bias, high-risk classification borrowed from European templates. None of them define what an audit trail for an autonomous agent must contain.
When the question is the wrong question, the answer is decoration.
6. Surface fears versus operational reality
The conversation at the federal level treats AI risk as a list dominated by replacement, hallucination, copyright, deepfake, bias. Three of those are real harms with real plaintiffs. None of them describe what breaks first when an enterprise actually deploys an autonomous agent against a production system.
What breaks first is more boring.
It is data exfiltration through a tool call that the policy engine never gated. It is prompt injection inside a customer-support ticket that escalates an agent into a privileged action. It is an agent that has been given access to a CRM, a calendar, an outbound email account and then writes to the wrong row at three in the morning. It is a developer team that ships a chatbot with no record of what it actually authorised, no replay path, no signed receipt of the verdict. It is a workforce in which 78 percent of AI users — by Microsoft and LinkedIn’s own Work Trend Index — bring their own model to work because the corporate model has been blocked, and a security organisation that has no telemetry on what the personal model saw. It is a board that signs an indemnification clause without knowing what an indemnification clause for an autonomous agent should cover.
The dread before opening Monday’s inbox is not laziness. It is the cost of running a queue with no triage policy. Every message is a possible authorisation request. Every authorisation request is a possible debt the system has not priced. Inside a Fortune 500 with an unmanaged AI surface, the same logic runs at scale: every prompt is a possible authorisation request. Every authorisation request is a possible breach the audit log will not be able to reconstruct. The fix is not better prompts. The fix is a default-deny on attention with explicit allowlists for what can enter the queue, a record of every refusal, a receipt for every grant.
The list of operational risks is short. It maps cleanly onto the OWASP Agentic Top 10. Each item has a known mitigation in the literature: tool-use permissioning with capability tokens, signed receipts that survive vendor exit, sandboxed compute with no network or filesystem reach, scoped delegation with a chain of custody, fail-closed default at the policy gate. The mitigations are not exotic. They are not being adopted at scale because the people writing the rules have not read them.
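One of those mitigations is worth seeing at its real size. A sketch of a capability token for scoped delegation, under assumed names rather than any shipping product’s API:

```python
# Sketch of scoped delegation with capability tokens. All names are
# illustrative. A token carries an explicit scope, an expiry, and the
# token it was derived from, so every grant has a chain of custody.
import time
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Capability:
    subject: str        # who holds the capability
    scope: frozenset    # tools this token may touch
    expires_at: float   # unix time after which it is dead
    parent: str | None  # id of the token this one narrows
    token_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    def delegate(self, subject: str, scope: set, ttl: float) -> "Capability":
        """Derive a narrower token. Scope can only shrink, never grow."""
        return Capability(
            subject=subject,
            scope=frozenset(scope) & self.scope,
            expires_at=min(self.expires_at, time.time() + ttl),
            parent=self.token_id,
        )

    def permits(self, tool: str) -> bool:
        return tool in self.scope and time.time() < self.expires_at
```

The design choice that matters is inside delegate: a derived token can only narrow scope and shorten expiry, so delegation is monotonic and the parent pointer preserves the chain of custody.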
The receipt is the vow.
7. What guarded execution actually is
Guarded execution is a layer between the model, the user, the data, the tools. It evaluates every tool call against a signed policy bundle. It returns a verdict — allow, deny, escalate — before the call leaves the boundary. It writes a receipt that names the caller, the action, the policy, the result. It refuses to fail open when the policy cannot evaluate. It produces an evidence pack that can be replayed offline against the original bundle, byte for byte, by an auditor who does not trust the vendor.
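A receipt at that boundary does not need to be exotic. A minimal sketch of what one might record; the field names are my assumption, not HELM’s or any product’s schema:

```python
# Sketch of a receipt from a guarded execution layer. Field names are
# assumptions for illustration, not any product's schema.
import hashlib
import json
import time

def make_receipt(caller: str, action: str, policy_hash: str, verdict: str) -> dict:
    receipt = {
        "caller": caller,        # who asked
        "action": action,        # what they asked to do
        "policy": policy_hash,   # digest of the signed policy bundle evaluated
        "verdict": verdict,      # "allow" | "deny" | "escalate"
        "ts": time.time(),
    }
    # A digest over the canonical record lets an auditor detect edits;
    # a real boundary would also sign it (see the Ed25519 sketch below).
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    receipt["digest"] = hashlib.sha256(canonical.encode()).hexdigest()
    return receipt

def verify_receipt(receipt: dict) -> bool:
    """Offline replay: recompute the digest without trusting the issuer."""
    body = {k: v for k, v in receipt.items() if k != "digest"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest() == receipt["digest"]
```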
The category was named publicly when Microsoft released the Agent Governance Toolkit on 2 April 2026. Forrester and other analysts have begun classifying agent control planes as a distinct infrastructure layer. Faramesh Labs ships a kernel-level enforcement model anchored in seccomp-BPF and Landlock. Lakera, Pangea, and a small number of others sell perimeter controls against prompt injection and exfiltration. Observability vendors like LangSmith and Arize occupy an adjacent layer that records but does not enforce.
HELM is the working example I know best because Mindburn Labs ships it. The benchmark artifact in the open-source repository records sub-millisecond p99 latency on the governed hot path; the pipeline is verified in TLA+; coverage of the OWASP Agentic Top 10 is ten of ten; receipts are signed with Ed25519 over a JCS-canonical manifest and are offline-verifiable; the kernel and reference packs are Apache-2.0 at github.com/mindburn-labs/helm. The point is not that one project owns the category. The point is that the category exists, has multiple credible implementations, and is the part of American AI that no current bill, executive order, or state statute requires of the systems most likely to fail.
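The signing detail generalises. A sketch of the Ed25519-over-canonical-JSON pattern using the Python cryptography package; here json.dumps(sort_keys=True) stands in for full RFC 8785 (JCS) canonicalization, which a real implementation would use because the two differ on number formatting and string escaping:

```python
# Sketch: sign a manifest with Ed25519 and verify it offline.
# Requires the `cryptography` package. The canonicalization below is a
# stand-in for RFC 8785 (JCS), not a conformant implementation of it.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def canonical(manifest: dict) -> bytes:
    return json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()

# Issuer side: the boundary signs the manifest when it issues the receipt.
key = Ed25519PrivateKey.generate()
manifest = {"caller": "sales-agent", "action": "crm.read", "verdict": "allow"}
signature = key.sign(canonical(manifest))

# Auditor side: with only the public key, the manifest, and the signature,
# verification succeeds or raises InvalidSignature. No vendor trust needed.
key.public_key().verify(signature, canonical(manifest))
```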
The country regulating models without naming the boundary has not described the problem.
8. Economic upside
The upside is large enough to be embarrassing if missed. The downside is being absorbed already, in pieces, in shadow form, by the same enterprises that will eventually pay for the cleanup.
Stanford HAI’s 2025 AI Index puts United States private AI investment at 109.1 billion dollars in 2024. Goldman Sachs and McKinsey have repeatedly modelled productivity uplift from generative AI in the seven-percent-of-GDP range over a decade for a fully deployed economy. The most defensible reading of the McKinsey gap — 88 percent using, 17 percent measurably benefiting — is that the gap closes only when companies stop running pilots and start running governed deployments. If even thirty percent of the “no measurable EBIT” cohort migrated to governed deployment over five years, the United States would book a single-digit-trillion-dollar increment in cumulative output. That outcome does not require a frontier-model breakthrough. It requires the boundary layer.
The sectors with the clearest upside are software, financial services, healthcare, professional services — not because they are glamorous, but because they have the audit-grade workflows that guarded execution can integrate with. Software development is already running ahead: Octoverse 2025’s headline number — half the global developer base on AI assistance — is the early signal of what happens when the deployment layer is good enough that procurement stops fighting it. Healthcare is running second, hostage to FDA classification cycles and HIPAA OCR guidance. Financial services is running third because compliance officers can read a TLA+ proof if you put it in front of them. Professional services is running where Harvey is.
The export angle matters. The United States can sell foreign enterprises a model. It can also sell them the boundary the model runs inside. The first is becoming a commodity. The second is not.
9. Where capital is flowing — and where it isn’t
Menlo Ventures’ December 2025 enterprise survey put 2025 enterprise generative AI spending at 37 billion dollars, split almost evenly: 19 billion to applications and 18 billion to infrastructure. That is nearly triple the 2024 total. The applications subtotal includes 7.3 billion on coding tools, 8.4 billion on general-purpose copilots, and 3.5 billion on industry-specific deployments led by healthcare. The infrastructure subtotal is dominated by foundation models, training compute, and data tooling. Hyperscaler 2026 capital expenditure adds another 650 to 700 billion at the model and compute layers.
The number that does not appear is a credible figure for spending on the boundary layer. Gartner’s “AI TRiSM” sizing came in at low single-digit billions for 2025. AI security category investment (Lakera, Protect AI, HiddenLayer, CalypsoAI, Pangea, plus Cisco’s roughly 400-million acquisition of the AI red-teaming startup Robust Intelligence in August 2024) has accumulated to under one billion in private capital across the whole category. That is a rounding error compared to a single hyperscaler quarter. The federal share is harder to count cleanly: GSA’s AI buying is growing, the Department of Defense’s CDAO has expanded, OMB M-25-21 has nudged civilian agencies, but no public dashboard discloses how much of that procurement actually buys runtime governance versus application functionality.
The capital that is flowing to AI is mostly buying two things: model intelligence and serving capacity. The thing it is not buying is the place an autonomous system is actually held to account. The under-investment is not subtle. It is a structural mismatch between the volume of agents being deployed and the maturity of the layer that constrains them. The market will eventually correct. American policy could accelerate that correction by recognising the category exists. So far it has not.
The receipt names the layer that pays the bill.
10. What this costs founders, developers, enterprises
The cost shows up first in behaviour, then in numbers.
Founders building AI products in regulated sectors spend a quarter of their early engineering budget on a homegrown policy and audit layer that an off-the-shelf substrate could replace. They do this because no public standard exists for what an enterprise customer will accept, and because procurement teams default to in-house custom builds when they do not see a credible third-party option. The cost is real. It is paid a second time when the next enterprise customer asks the same questions and the founder rebuilds against a slightly different SOC 2 reading. The third rebuild is paid for by a customer that has churned.
Developers paid the cost the day GitHub Copilot crossed 4.7 million seats and the security teams discovered they had no instrumentation on what the assistants saw, suggested, or wrote out. The fix is not a Copilot ban; it is a permissioned proxy with signed receipts. Most of the response so far has been the ban.
Enterprises paid the cost in 2024 when 78 percent of AI users brought their own model to work and the chief information security officer learned about it from a Microsoft research report rather than from telemetry. The remediation budget has now been authorised at most Fortune 500 firms. The remediation architecture has not.
The associate inside the incognito window from the opening paragraph could be editing that brief inside a governed enterprise agent bought from a vendor that issues signed receipts and cooperates with discovery. Her firm could buy that. Most have not. The cost of not buying is paid in talent, since junior associates churn faster when their tools are worse, and in market position when a peer firm with a working AI floor closes the same engagement in two thirds of the time. The legal market is small enough to model. Most other markets are larger.
The bill is the unwritten one.
11. What the country should do instead
A working policy is not a longer policy. It is a more accurate one.
First, the federal government should formally separate creative-industry intellectual property from enterprise AI runtime governance in any forthcoming legislation. The two regimes belong in different statutes, with different enforcement bodies. Conflation is the first failure mode and the most expensive.
Second, the National Institute of Standards and Technology, operating now as the Center for AI Standards and Innovation, should publish a reference specification for an agent execution boundary: tool-use permissioning, signed receipts, offline-replayable evidence packs, fail-closed default semantics, human-approval escalations. The existing OWASP Agentic Top 10 is the threat list. CAISI’s job is the corresponding control list; a sketch of what that list could require appears after the eighth recommendation below.
Third, the General Services Administration should require the boundary specification in federal AI procurement under OMB M-25-21 and M-25-22. The federal buying power is large enough to create a reference market and to standardise enterprise AI procurement language across the country in the same way HIPAA did for healthcare information systems.
Fourth, Congress should pass a federal safe harbour for organisations that deploy AI through a qualifying guarded execution layer. The harbour reduces tort exposure for compliant deployments and creates an economic incentive that bias-disclosure regimes cannot match. It is the regulatory move with the highest expected value per page of legislation.
Fifth, the Securities and Exchange Commission and the Federal Trade Commission should require enterprises that deploy autonomous agents in customer-facing or financial workflows to retain receipts and produce them on request. The receipt is already produced by every credible execution boundary. Mandating its retention does not require a new technical regime.
Sixth, state attorneys general should agree to a model statute — drawn from Colorado, Texas, California’s running text — that recognises a guarded execution deployment as a presumptive compliance posture. Without coordination, the patchwork will continue to advantage incumbents that can absorb 50 different lawyers’ readings of 50 different bills.
Seventh, sovereign and federal capital should fund open-source HELM-class platforms the way DARPA funded the early TCP/IP work and the National Science Foundation funded NSFNET. Most of American AI’s deployment cost will be paid out of operating budgets that have no slack for governance research. Public capital is the right capital to underwrite the substrate.
Eighth, regulators should be trained on actual system architecture before they are asked to write rules about it. The position that frontier-model spend alone is the bottleneck has been foreclosed by every system that shipped capability without a record. The position that disclosure regimes alone will produce safe deployment has been foreclosed by the 2025 BCG data on leader-laggard divergence. Both positions are still in active circulation in Washington, which is why the rules being drafted continue to miss the layer.
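To make the second recommendation concrete: one hypothetical shape for the minimum a CAISI control list could require of a receipt. The field names are illustrative assumptions, not a proposed standard:

```python
# A hypothetical minimum field set a reference specification could
# mandate for agent receipts. Illustrative, not a proposed standard.
REQUIRED_RECEIPT_FIELDS = {
    "caller",      # authenticated identity that requested the action
    "action",      # the tool call, with its arguments or their digest
    "policy",      # digest of the signed policy bundle that was evaluated
    "verdict",     # "allow" | "deny" | "escalate"
    "signature",   # Ed25519 (or equivalent) over the canonical record
    "timestamp",
}

def conforms(receipt: dict) -> bool:
    """A procurement-grade check: does a vendor receipt carry the minimum?"""
    return REQUIRED_RECEIPT_FIELDS <= receipt.keys()
```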
The list is not a wish. It is the minimum that returns the policy conversation to the system being governed.
12. Verdict
The United States is the country that builds the model. It is not yet the country that runs it well. The lawsuits will settle, the executive orders will compound, the press cycle will keep treating Hollywood and the agent runtime as the same problem. They are not. The country that figures out the difference first will own the layer where the work actually happens.
Most of that work is American.
Most of the boundary is not yet built.
Build the boundary.
References
- Anthropic, Bartz v. Anthropic settlement, $1.5B, preliminary approval September 25, 2025.
- Bartz v. Anthropic, 3:24-cv-05417 (N.D. Cal.), Judge Alsup summary-judgment ruling, June 23, 2025.
- Kadrey v. Meta Platforms, 3:23-cv-03417 (N.D. Cal.), Judge Chhabria order, June 25, 2025.
- New York Times Co. v. Microsoft Corp. and OpenAI, 1:23-cv-11195 (S.D.N.Y.).
- Concord Music Group v. Anthropic, filed October 18, 2023 (M.D. Tenn.), transferred to N.D. Cal.; publishers expanded claims January 2026.
- UMG, Sony Music, and Warner v. Suno (D. Mass., 1:24-cv-11611) and v. Udio (S.D.N.Y., 1:24-cv-04777), filed June 24, 2024.
- NO FAKES Act of 2025, S.1367 (Senate) and H.R.2794 (House), 119th Congress.
- Tennessee ELVIS Act, effective July 1, 2024.
- Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, signed January 23, 2025.
- America's AI Action Plan, July 23, 2025.
- OMB M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, April 3, 2025.
- Cooley state AI law tracker, April 24, 2026.
- Texas Responsible Artificial Intelligence Governance Act (TRAIGA), signed June 22, 2025, effective January 1, 2026.
- Akin Gump, Colorado AI Act delayed to June 30, 2026.
- NIST Center for AI Standards and Innovation.
- OpenAI primary round announcement, March 31, 2026: $122B raised at an $852B valuation.
- Anthropic Series G announcement, $30B at $380B post-money, February 2026.
- Stargate Project announcement, OpenAI, Oracle, SoftBank, and MGX, January 21, 2025.
- OpenAI announcement of five additional Stargate sites, September 23, 2025.
- Hyperscaler 2026 AI capex coverage, CNBC, February 6, 2026.
- McKinsey, The State of AI 2025.
- BCG, Build for the Future 2025: The Widening AI Value Gap, September 2025.
- Stanford HAI, AI Index Report 2025.
- Microsoft and LinkedIn, Work Trend Index 2024: AI at Work Is Here.
- GitHub Octoverse 2025.
- Walmart AI rollout to 1.5M associates, June 24, 2025.
- JPMorgan LLM Suite reaching 200K+ employees, CNBC, September 30, 2025.
- Harvey $200M raise at an $11B valuation, March 25, 2026.
- FDA AI/ML-enabled medical-device list, year-end 2025 totals.
- Menlo Ventures, 2025 State of Generative AI in the Enterprise, December 9, 2025.
- Microsoft Agent Governance Toolkit (AGT), open-source release, April 2, 2026.
- Bipartisan House AI Task Force Report, December 17, 2024.
- Gartner: 40 percent of enterprise apps will feature task-specific AI agents by 2026.
- HELM open-source repository, Mindburn Labs, Apache-2.0.