Export Controls, Trade Deals, and the Rise of Sovereign AI

The United States put a price on national security and called it a fee. A 15 percent export duty was floated on shipments of technology that had been banned months earlier. At the same time, officials threatened sweeping tariffs on semiconductors, on goods that contain them, and even on the tools that make chips; and they reopened sales of Nvidia’s H20 to China in the midst of tense bargaining over rare earths. Reports that summer described these moves in detail, showing how controls—once insulated from bargaining—were folded into trade talks. In a matter of months, national-security rules had become negotiable.

When rules are negotiable, partners do not stand still. Middle powers already wary of dependence accelerate hedges. Firms that dominate the operating layers of AI—semiconductors, cloud infrastructure, and foundation models—have few checks on vertical strategies that raise exit costs. States and enterprises facing those costs turn to “sovereign AI”: local gates on data, national rules for deployment, domestic copies of models, and contracts that fix jurisdiction. Markets fragment into nationalized stacks, and coalition discipline weakens. A multilateral regime depends on credibility; once credibility erodes, updates to controls slow and enforcement thins.

A study published in March 2025 by the Bank for International Settlements mapped five layers of the AI stack: hardware, cloud, training data, foundation models, and applications. It showed extreme concentration at the base. Nvidia held well over ninety percent of data-center GPU revenue. Three cloud providers—Amazon, Microsoft, and Google—dominated infrastructure services. Switching costs were high, reinforced by egress fees, license penalties, and vertical integration. The study also traced a reinforcing loop between cloud resources, proprietary data, and model development, a cycle that deepens lock-in. A system-wide outage caused by a single vendor update demonstrated how this concentration magnifies operational risk.

In July 2025, coverage of the H20 reversal made clear why the choice mattered. The chip in question was not a training flagship but an inference workhorse. Inference workloads were projected to require nearly five times the compute of training by 2026. The H20 ran inference faster than higher-end training parts, making it valuable precisely as models shifted toward inference-heavy tasks. Shipments in the millions meant billions in revenue and an acceleration of Chinese deployment capacity. Because the reversal came during rare-earth bargaining, it also set a precedent: critical components could be traded away in exchange for temporary commodity relief. That lesson will shape every future control update.

A month later, further reporting described the Section 232 probe expanding until nearly any product containing chips could be taxed, along with semiconductor equipment itself. Public comments from industry groups and allied governments warned that such moves would damage production capacity and strain partnerships. A former senior official remarked that if a fee can override national-security concerns, partners will no longer accept the rules as rules. Coalition strength depends on separating principle from price; without that separation, alignment unravels.

The appeal of sovereign AI grows under these conditions. By June 2025, essays surveying global initiatives described projects in France, Singapore, Saudi Arabia, and the United Arab Emirates. Vendors encouraged this direction, pitching national stacks as symbols of autonomy, while open-source models lowered barriers to entry. Even if these projects still rely on American semiconductors, clouds, and models, the political advantage of domestic control over deployment is clear. National gates on data and critical uses reduce exposure to external pressure, especially when foreign rules look unstable.

The BIS study explains why this hedge is rational. Exit from dominant providers is costly not only in money but in disruption across bundles of compute, storage, compliance, and access rights. Egress fees impose direct charges. License terms penalize migration. Vertical integration ties layers together. The cloud-model-data loop reinforces dependence. In such a structure, credibility becomes part of the cost model: when actors expect rules to shift, they pay for insurance by building sovereign capacity.
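To make that calculus concrete, here is a minimal sketch in Python. The figures and function names are hypothetical placeholders, not estimates from the BIS study; the point is only that the perceived probability of a rule shift enters the arithmetic the same way egress fees, license penalties, and migration disruption do.

```python
# Hypothetical sketch of the exit-cost calculus described above.
# All figures are illustrative placeholders, not estimates from the BIS study.

def expected_dependence_cost(
    annual_spend: float,          # yearly spend with the dominant provider
    egress_fees: float,           # one-time data-egress charges on exit
    migration_disruption: float,  # lost productivity while re-platforming
    license_penalties: float,     # early-termination or re-licensing costs
    p_rule_shift: float,          # perceived probability that access rules change
) -> float:
    """Expected cost of staying locked in, given some chance of a forced exit."""
    exit_cost = egress_fees + migration_disruption + license_penalties
    return annual_spend + p_rule_shift * exit_cost


def sovereign_hedge_cost(annual_spend: float, premium: float) -> float:
    """Cost of running a thicker domestic stack: the same workload, plus a premium."""
    return annual_spend * (1 + premium)


if __name__ == "__main__":
    spend = 100.0  # index the baseline cloud bill to 100
    for p in (0.05, 0.25, 0.50):  # rising doubt that today's rules will hold
        locked_in = expected_dependence_cost(spend, egress_fees=20,
                                             migration_disruption=60,
                                             license_penalties=20,
                                             p_rule_shift=p)
        hedge = sovereign_hedge_cost(spend, premium=0.35)
        print(f"p(rule shift)={p:.2f}  stay={locked_in:.0f}  hedge={hedge:.0f}")
```

On these invented numbers, staying put is cheaper while confidence in the rules is high, and the sovereign hedge becomes the cheaper option once the perceived chance of a rule shift passes roughly one in three. The numbers are arbitrary; the structure of the decision is not.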

Not all observers see fragmentation as inevitable. A December 2024 essay proposed a coexistence architecture: standardize application programming interfaces for foundation models, build an abstraction layer that insulates applications from upstream change, and use adjudication systems to compare outputs from trusted and untrusted models, gating those deemed risky. These measures would reduce migration costs and preserve choice. The BIS study pointed to similar interventions at the infrastructure level: multi-cloud strategies, common APIs, and duties that prevent coercive tying. Together, these measures would lessen the incentive to build thick sovereign stacks, provided the rules of the game remain predictable.
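A minimal sketch of how such a coexistence layer might look, assuming nothing beyond the essay's description: a standardized interface over interchangeable model back ends, and an adjudicator that compares a candidate model's output against a trusted reference before letting it through. The class names and the token-overlap heuristic are illustrative inventions, not the authors' design.

```python
# Illustrative sketch of the coexistence architecture described above:
# a common interface over interchangeable models, plus an adjudication gate.
# Names and the similarity heuristic are hypothetical, not from the cited essay.

from typing import Protocol


class FoundationModel(Protocol):
    """Standardized API: any back end exposing complete() is substitutable."""
    def complete(self, prompt: str) -> str: ...


class AdjudicatedGateway:
    """Abstraction layer that insulates applications from upstream change."""

    def __init__(self, trusted: FoundationModel, candidate: FoundationModel,
                 min_agreement: float = 0.8):
        self.trusted = trusted
        self.candidate = candidate
        self.min_agreement = min_agreement

    def _agreement(self, a: str, b: str) -> float:
        # Placeholder adjudication: token overlap between the two outputs.
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)

    def complete(self, prompt: str) -> str:
        """Serve the candidate's answer only if it agrees with the trusted
        reference; otherwise fall back, so applications never notice the swap."""
        reference = self.trusted.complete(prompt)
        answer = self.candidate.complete(prompt)
        if self._agreement(answer, reference) >= self.min_agreement:
            return answer
        return reference
```

Because applications call only the gateway, swapping the candidate back end changes nothing downstream, which is exactly the property that lowers migration costs and keeps choice open.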

Coalition management turns on this point. Controls require frequent updates to keep pace with hardware, architectures, and evasion. Partners align when they believe rules rest on principle and will hold. They hesitate when last quarter’s exception looked like a deal. Once allies doubt stability, they demand carve-outs, delay alignment, or pursue sovereign hedges of their own. Each step weakens the speed and reach of the next control round. The consequence is not a collapse but a thinning of the regime.

Counterarguments within the record carry weight. One holds that exporting inference chips keeps China dependent on the American stack, where access can be conditioned. Another points to U.S. advantages in cloud and models that could contain fragmentation if mobility is supported. These claims are not dismissed but set against contrary facts: Chinese firms have pursued substitution despite lower yields, resumed exports ease pressure while adding capability, and partners react to visible instability by hedging. A 2023 review of global governance added a broader warning: regulation lags technology, democratic states must coordinate to avoid a patchwork, and authoritarian uses will exploit gaps.

Tariff politics show how industrial and security aims blur. The Section 232 investigation reached beyond chips to products across the economy. Carve-outs were floated for firms pledging domestic investment. Allies and industries protested the risks to supply chains. The issue was not the aim of rebuilding manufacturing but the instrument chosen. When export controls are treated as bargaining tokens, their distinct legitimacy erodes. Coalition partners see instability, and rational states hedge.

The logic of sovereign AI is therefore clear. It insulates against policy volatility by securing domestic control over data and deployment. It insulates against fragility by diversifying dependence and keeping models close. It does not imply hardware independence. Most sovereign projects will remain reliant on American technology. The thickness of sovereignty varies with the stability of rules: when rules wobble, the hedge thickens; when rules hold, reliance remains thin. The tools to reduce pressure are available. Standards, abstraction, adjudication, multi-cloud—each lowers migration costs and preserves mobility. Each requires credible rules.

The paradox that opened this essay reads, in the end, as a choice. To preserve advantage, the United States must preserve credibility. To preserve credibility, it must keep export controls off the trade table. It must adapt thresholds to where deployment risk now lies, in inference at scale. It must invest in the connective tissue of mobility so that partners can align without locking themselves in. Or it can continue to price exceptions, teaching partners to diversify and adversaries to press. One path slows fragmentation; the other hastens it. The distinction does not rest on slogans but on how rules are applied, revised, and kept apart from bargaining. The difference will show not in headlines but in whether coalitions still move when the next update is due.


Sources

Foreign Policy, “Trump’s Trade Tactics Come for Chip Controls,” Aug. 13, 2025.

Foreign Policy, “The Nvidia Chip Deal Trades Away the United States’ AI Advantage,” July 22, 2025.

Foreign Affairs, “What If China Wins the AI Race?,” June 13, 2025.

Bank for International Settlements, The AI supply chain (BIS Papers No. 154), March 2025.

Foreign Affairs, “The Real Stakes of the AI Race,” Dec. 27, 2024.

Tilovska-Kechedji, “Navigating the Geopolitical Landscape of Artificial Intelligence,” 2023.

