What "Sovereign AI" Means for U.S. AI Exports
On several occasions, the White House has described “AI sovereignty” as an intended deliverable of the American AI Exports Program (AAEP). For example, Michael Kratsios, the Director of the White House Office of Science and Technology Policy (OSTP), recently stated in congressional testimony that one of the goals of the AAEP “is for U.S. companies to provide modular AI stack packages that empower countries to develop sovereign AI capabilities with American technology.”1
What this means in practice will be defined, at least in part, by the companies that aim to participate in the AAEP. Some of these companies addressed AI sovereignty (and related concepts such as cloud sovereignty) in their responses to the Commerce Department’s October 2025 request for information, which solicited feedback on how the AAEP should operate.
Here’s an analysis of the submissions from companies that responded to the RFI and explicitly addressed sovereignty issues.2
The “sovereign AI six”
Across the AAEP comments that explicitly invoke sovereignty, the concerns seem to cluster around six areas:3
Data control — where sensitive data is stored and processed, who can access it, and whether the system provides an audit trail;
Compute control — whether partners get dedicated compute capacity and meaningful control over how the infrastructure is run (including the option to own and operate it);
Model control — what rights customers have over model choice and customization (including fine-tuning), and whether model weights are shared with customers or remain vendor-controlled;
Deployment control — locally hosted inference and the ability to deploy on-prem, cloud, or both, including air-gapped options for sensitive environments;
Assurances — auditability and security/compliance controls, plus (where relevant) legal/political reassurance tied to sovereignty concerns (including foreign government access concerns); and
Continuity and exit risk — cutoff or “kill switch” fears, service continuity, and whether partners can keep operating or switch providers without losing capability.
What each company said about sovereignty
Only companies that themselves used sovereignty language are included below.
OpenAI (PDF download). OpenAI wraps sovereignty into an “AI Infrastructure Package” whose purpose is “[s]overeign compute and secure cloud,” paired with “[s]afety-aligned foundation models and model management.” In other words, OpenAI is packaging sovereignty as a product consisting of computing power and a security framework, designed to be replicated across partnerships.
AWS (PDF download). AWS treats sovereignty as something the AAEP should productize across AI and cloud delivery options, arguing that proposals should span “standard cloud service delivery models to specialized sovereignty configurations.” It also makes sovereignty concrete by focusing on partner control features, including “data residency options that address partner sovereignty requirements.” Finally, AWS flags that sovereignty is not just technical but also a development finance issue, urging “Congress to update the Development Finance Corporation’s charter to allow support for sovereign projects…”
Google (PDF download). Google uses sovereignty language largely as a warning about exclusionary procurement and localization policies. It flags “discriminatory cloud sovereignty rules and procurement requirements” and separately points to “forced data localization.” In this filing, sovereignty is less a design objective than a competitiveness problem. The risk is that “sovereignty” becomes a policy lever used to restrict U.S. market access rather than a narrowly tailored security requirement.
IBM (PDF download). IBM treats sovereign AI as a deployment constraint that AAEP packages must satisfy across different environments. It argues that partners will require “multi-model, multi-cloud, and hybrid-cloud deployment capability,” which it describes as “fundamental to meeting the diverse needs of partners…whether in public clouds, private datacenters, or emerging sovereign AI environments.” IBM then links that requirement to program design. It urges prioritizing “openness, competition, and interoperability” and centering the AAEP on “technology-agnostic products and services that maximize choice for potential partners,” rather than on architectures tied to a specific infrastructure or cloud environment.
Microsoft (PDF download). Microsoft addresses sovereignty as a trust issue that creates friction between it and its customers (and, by implication, between the United States and foreign countries). A number of Microsoft’s customers worry as much about cutoff risk and government access as they do about performance. Accordingly, its most operational ask is continuity assurance: “Clear U.S. commitments not to invoke any ‘kill switch’ against trusted partners.” It then ties sovereignty to lawful-access worries, recommending a diplomatic push by the U.S. to address “digital sovereignty concerns,” and specifically urging the U.S. to “expand CLOUD Act Agreements.”
Dell (PDF download). Dell is explicit in trying to include sovereignty in the AAEP’s design. It proposes that evaluation should include “data residency pathways; sovereign deployment options.” It also treats sovereign deployment models as practical reference architectures, including “Sovereign hosted inference.” On the AI-specific layer (not just infrastructure), Dell points to governance mechanisms like “model weight controls” and preserving options for “sovereign finetuning.”
HPE (PDF download). HPE defines sovereignty in the language of regulated and public-sector operating constraints. It emphasizes controlled environments where “security, data sovereignty, and operational control are critical,” and frames sovereign AI not as a niche European preference but as something that’s in demand in many key regions, citing “significant interest in sovereign AI infrastructure.” The implication is that AAEP packages need to include credible on-prem offerings, rather than just public cloud packages.
AMD (PDF download). AMD treats sovereignty as a question of ownership and operational control. It argues that partners will evaluate AAEP offers against “sovereign AI goals,” and that openness/interoperability will serve as differentiators for buyers “seeking sovereign AI infrastructures.” AMD then argues that the AAEP should enable “the full sovereign ownership of foreign AI system buildouts,” warning that sovereignty-incompatible program structures will reduce uptake and drive buyers toward non-U.S. options.
A nod to Miles’ Law
Generally speaking, a company’s views on AI, cloud, and other forms of digital sovereignty depend on its business model. So it’s probably no coincidence that the most sovereignty-forward submissions come from the hardware and infrastructure layers.
Meanwhile, the cloud companies have a broader range of views on sovereignty, depending on their market position and the various business and geopolitical pressures they face (though, fundamentally, the major cloud companies are in a position to offer sovereignty solutions like air-gapped, on-prem deployments).
The AI labs — here OpenAI — offer an interesting hybrid view that basically reads like “sovereign compute + secure cloud,” paired with “safety-aligned foundation models and model management,” plus “secure inference hosting” and a governance/security support layer.
As a rule (and absent other factors), sovereignty demand tends to reallocate value away from centralized managed services and toward deployable, controlled, locally operated systems. That’s perhaps the clearest dividing line among the companies engaging in this space.
Mapping company filings to the sovereign AI six
To make the docket comparable, I coded each filing against the six categories above:4
E = explicitly tied to sovereignty language in the filing
P = present but indirect or not fully developed
B = sovereignty framed mainly as a barrier
— = not materially addressed
Growing alignment around AI sovereignty?
Stepping back, it does seem remarkable that both the Trump administration and a significant number of major American tech firms are now treating sovereignty as a core adoption requirement for AI export packages rather than a niche European focus or a trend to discourage.
But even though there’s growing alignment around the abstract notion that AI sovereignty should be supported (or at least accommodated), the details of what “sovereign AI” means in general — much less in the context of the AAEP — haven’t yet been settled. If anything is clear, it’s that there’s no shared definition of what “supporting” sovereign AI actually entails.
For some companies, sovereign AI is an issue of configurations and controls. For others, it’s a governance issue. For one, it’s a trust and geopolitical relationship challenge stemming from kill-switch and government-access-to-data issues. And for another, “sovereignty” should at least sometimes be seen as a tool of discrimination against American tech companies.
What comes next?
At a high level, OSTP has reframed sovereign AI as a U.S. export objective rather than a protectionist irritant to be fought. That’s a meaningful shift in posture, and one that the RFI comments suggest U.S. companies and their foreign partners have noticed.
Now the task is implementation. Commerce has closed the RFI phase and is moving toward the early-2026 call for consortium proposals. Before that solicitation is finalized, Commerce should invite comments focused specifically on how the AAEP should handle sovereign AI. The goal would be to bring clarity to the sovereignty issues the docket already surfaces and to ask what concrete products and features should count for each. Among other things, doing so would make sovereignty claims comparable across proposals, sharpen what “sovereign” means in the AAEP context, and help Commerce pinpoint which sovereignty requirements are compatible with U.S. security and economic objectives.
This won’t resolve the trust questions surrounding service continuity guarantees, data access, and export controls, which lie more in the hands of the U.S. government than of American companies. Those require executive branch and congressional attention, in addition to effective diplomatic engagement.
Still, a targeted comment round at Commerce would give evaluators a common framework for comparing sovereignty claims across proposals. A parallel effort at OSTP could map how sovereign AI intersects with the fluid dynamics of AI geopolitics more broadly and, perhaps, help address the government-centered trust issues outlined above.
The bigger picture
Taken together, the sovereignty-referencing corporate comments on the RFI are doing two things at once.
First, they sketch a roadmap for what the AAEP’s “modular AI stack packages” could look like in the real world. As reflected in the “sovereign AI six” list, this includes data-control features that make residency and auditability credible and deployment options that work in controlled environments (sovereign hosted inference, on-prem, air-gapped, etc.). In other words, the comments describe the products and features that an exportable sovereignty product catalog could include.
Second, the comments surface a set of policy issues that go beyond the AAEP and speak to the deeper question of trust in and reliance on the United States and U.S. technology. The “kill switch” and lawful-access points stand out here. More broadly, calls for “sovereign AI” are often shorthand for expressing dependency concerns and fears about withdrawal of access or loss of operational control. In that sense, the AAEP docket has become a feedback loop through which foreign partners’ growing concerns about the U.S. government’s use of its leverage are communicated back to Washington. The companies are, in effect, reporting what they hear from customers who increasingly see American influence — including influence exercised through American companies’ technology — as a risk to be managed if not altogether avoided.
Conflict of Interest Disclosure: The views I share in this essay are my own. Although I hold professional affiliations and advisory roles with several organizations, I don’t receive compensation or instructions from them (or any other entities) regarding the views I express in essays, panels, or other public and semi-public forums unless otherwise stated in a specific piece or setting. In some cases, I might receive an honorarium from the publisher as payment.
Here’s my methodology. First, I started this out of curiosity to see whether companies were addressing sovereignty issues, and my casual read snowballed from there into a more structured (though imperfect) analysis. I found these companies’ responses by searching the names of companies I assumed might say something on sovereignty + keywords like “sovereign” and “sovereignty.” There may be several other companies that focused their RFI responses on AI sovereignty and perhaps expressed very different views from those stated by the companies covered here. I’ll also note that it’s tough to discern a company’s POV on digital sovereignty issues just from one filing, so it’s quite possible that some of these companies’ more complete views diverge from the statements in their filings (footnote to my footnote: the substance of these filings often depends on whether they were driven by sales, policy, legal, or other teams).
Yes, I’m calling this list the “sovereign AI six.” I’ve made my peace with it.
These are my qualitative judgments based on the filings, not a third-party or “objective” scoring system of the companies’ overall postures on sovereignty issues.