The Coming Age of AI Governance Blocs
International AI governance isn’t drifting into fragmentation; it’s being built that way.
(Note: On Wednesday, I’m moderating a panel on global AI governance at UC Law San Francisco. This essay is an organized version of my notes for a section of the panel that I hope to focus on competing governance blocs. Thanks to the organizers of the conference for inviting me to participate.)
The race for artificial intelligence advantage among nations — which spans technology, economics, geopolitics, and other realms — is fundamentally reshaping the trajectory of global AI governance. Rather than fostering international cooperation, the view that AI is essentially a competition is driving countries and blocs to develop separate AI governance frameworks that advance their own strategic interests and visions.1
This shift reflects a kind of bet by countries’ political leaders that strong influence and control over AI technology (including rules and standards) will directly determine key outcomes like national security capabilities and economic competitiveness.
With stakes this high, the coordinated global AI governance that some have hoped for has given way to competitive governance, in which major powers prioritize rule-making for themselves and their close allies over shared frameworks with broader groups of countries.
Evidence of this trend appears in three recent documents: the G7 Leaders’ Statement on AI for Prosperity, the BRICS Leaders’ Statement on the Global Governance of AI, and the European Union’s International Digital Strategy. Each signals governance fragmentation and articulates a distinct AI governance philosophy. Together, the three documents show that AI governance isn’t drifting into fragmentation; it’s being actively built that way.
Fragmenting global AI governance by design
Efforts at global technology governance have long vacillated between cooperation and fragmentation, influenced by geopolitics, economic incentives, and the distribution of power among nations. AI represents a particularly acute case of this tension. Unlike previous technologies that had slower or more localized impacts (thus allowing for gradual and cautious governance approaches), AI is perceived as deeply, broadly, and rapidly transformative. Policymakers view it as fundamentally reshaping economic growth, national security, and even the nature of state power itself.
This perception makes unified global standards both more critical and more politically fraught. Without a coordinated framework, competing blocs are likely to develop separate ecosystems with divergent technical standards and regulatory requirements. This fragmentation could hinder global cooperation, potentially exacerbating geopolitical tensions and creating barriers to economic integration. Countries may align their AI ecosystems to reinforce existing geopolitical alliances or forge new partnerships, deepening technological divides along geopolitical lines.
Three alternative and competing visions
Three recent documents crystallize this fragmentation.
The G7’s approach to AI governance has undergone a significant shift from its earlier positions. The 2023 Hiroshima AI Process established what was then called “the first international framework” for AI governance, emphasizing “the need to manage risks and to protect individuals, society, and our shared principles including the rule of law and democratic values” while calling for “International Guiding Principles on [AI] and a voluntary Code of Conduct for AI developers.” The process aimed to create comprehensive governance frameworks that balanced innovation with risk management and democratic oversight.
However, the G7’s June 2025 statement represents a notable retreat from this governance leadership, likely influenced by the U.S. shift toward competitive positioning over precautionary regulation. The June document focuses almost entirely on economic competitiveness, market enablement, and helping businesses “unlock competitiveness and deliver unprecedented prosperity.” The shift is stark: from governance frameworks that balance innovation with risk management to voluntary norms that treat safety concerns as secondary to competitive advantage, with governance reduced to market facilitation and trust-building rather than constraint or oversight.2
The BRICS July response offers a direct counter-vision: state sovereignty, UN-centered governance, and calls for more inclusive representation, implicitly challenging existing Western-dominated frameworks in pursuit of equitable development. Interestingly, the BRICS statement explicitly calls out the challenges of competing governance visions:3
The proliferation of governance initiatives and the diverging views in multilateral coordination at the international level may aggravate existing asymmetries and the legitimacy gap of global governance on digital matters, further eroding multilateralism as a result.
The EU’s June strategy outlines a third path, one that treats regulation itself as an instrument of strategic influence, which I examine in the next section.
Enablement vs. sovereignty vs. regulation
These three documents reveal fundamentally different governance philosophies. The G7 advocates voluntary, industry-collaborative frameworks that prioritize economic competitiveness and practical adoption through multistakeholder processes — essentially treating governance as market facilitation rather than constraint. BRICS champions state-led governance through UN-based coordination, emphasizing national sovereignty and equitable representation to counter what it sees as exclusionary Western-dominated frameworks. The EU stakes out a third path, deploying regulatory power and infrastructure diplomacy as tools of strategic influence while positioning itself as the guardian of human-centric AI development.
These approaches reflect deeper geopolitical priorities. The G7 seeks competitive advantage through innovation enablement, BRICS pursues sovereign autonomy and Global South representation, and the EU uses regulatory leadership as an instrument of global influence and strategic autonomy.
Some strategic implications of AI governance divergence
The apparent divergence between blocs becomes more complex when examining overlapping memberships. Three G7 members — France, Germany, and Italy — along with the EU itself, which participates in the G7, are also bound up in the EU’s international digital policy, creating potential tensions between the G7’s retreat from governance and the EU’s regulatory assertiveness. However, this overlap may actually be strategic rather than contradictory.
For European G7 members, the two approaches could function as complementary tools. The G7’s voluntary, market-friendly framework appeals to businesses and innovation-focused partners, while the EU’s regulatory approach provides binding standards and strategic leverage. Countries like France can champion voluntary cooperation in G7 forums while simultaneously using EU regulatory power to shape global standards, essentially offering both carrots and sticks in international AI governance.
This reflects a broader European strategy of refusing to choose between regulation and innovation — what might be called the AI “Draghi Effect,” in which the EU decides it’s time to build economically competitive AI capabilities while maintaining its regulatory leadership, pursuing a both/and approach rather than an either/or choice.
This dual approach may explain the G7’s governance retreat. If European members can advance binding regulation through EU mechanisms, the G7 can focus purely on competitive positioning without appearing to abandon governance entirely.
For other countries, particularly the US, this represents a significant shift. The EU’s strategy signals its intent to pursue autonomous digital leadership rather than transatlantic partnership, fundamentally repositioning the bloc as an independent AI power that will shape global standards through its own regulatory frameworks and infrastructure partnerships rather than coordinating closely with American approaches.
More broadly, fragmentation could force countries and companies to choose sides, aligning with specific governance ecosystems rather than operating in a unified global market. This balkanization would not only complicate multinational cooperation but could also slow innovation diffusion and create new forms of technological dependency, where access to AI capabilities becomes contingent on geopolitical alignment.
The multipolar AI future?
The broader trend toward geopolitical structuring of AI governance is accelerating, driven by competing visions of how AI should be developed, deployed, and governed globally. This fragmentation appears likely to deepen regardless of specific policy moves by individual countries.
Near-term escalation, acceleration, and intensification?
Over the next two years, we could expect:
Escalated competition in multilateral bodies (e.g., the ITU), with BRICS members — led primarily by China and Russia — pushing aggressively for equitable representation and standard-setting reforms that counter Western-dominated frameworks.
Accelerated EU initiatives such as Digital Partnerships, AI Factories, and regulatory diplomacy, further embedding European standards globally as counterweights to other approaches — including the U.S.’s.
Intensified “standards wars” as different blocs seek to establish their technical specifications and governance principles as global norms.
Major AI companies increasingly navigating (and, in some countries, potentially shaping) these competing frameworks through commercial pressures and strategic partnerships.
Longer-term consolidation of AI governance blocs?
In the long term, this emerging fragmentation could solidify into distinct global ecosystems, each driven by its own governance philosophies and technical standards. A likely scenario is selective interoperability, where strong economic incentives maintain alignment in commercial applications while strategic areas such as security-sensitive applications, critical infrastructure, and frontier AI research remain fragmented along geopolitical lines.
Potential future outcomes for the AI safety institutes
AI safety institutes and similar evaluation bodies could become increasingly politicized rather than serving as neutral global arbiters. For the G7 and EU, they could also become anchors of soft alignment — essentially trusted nodes in voluntary coordination networks. BRICS and other Global South actors could either advocate for new UN-affiliated or regionally governed institutes to ensure sovereignty and balance, or engage with the existing AI safety institute network (currently comprising 11 members: essentially the G7 plus Australia, Kenya, South Korea, and Singapore) while pushing for more inclusive governance and representation within these frameworks. Notably, no BRICS countries are currently members of this network.4
The U.S., the UAE, and the complexity of AI bloc alignment
Recent U.S. developments may accelerate these trends, creating complex dynamics. The forthcoming AI Action Plan’s focus on reducing regulatory burdens and promoting global deployment of U.S. technologies, combined with the transformation of the AI Safety Institute into the “Center for AI Standards and Innovation” (CAISI) with its explicit mandate to “ensure US dominance of international AI standards,” signals a more assertive American approach that prioritizes competitiveness over precautionary governance.
This shift could intensify bloc-based competition while creating particularly complex positioning for BRICS members like the UAE, which has been building significant AI partnerships with the U.S. while simultaneously being part of a bloc advocating for sovereignty-respecting, UN-based governance. The UAE’s dual positioning illustrates how BRICS unity on AI governance may face internal tensions as individual members pursue pragmatic bilateral relationships that don’t align neatly with the bloc’s collective principles.
AI governance competition is influencing, and sometimes creating, new patterns of international alignment and rivalry. The frameworks emerging from these three blocs will likely determine which countries and companies can access advanced AI capabilities, how technical standards develop globally, and where the centers of AI influence and power will be located.
Conflict of Interest Disclosure: The views I express in this essay are my own. While I maintain professional affiliations and advisory roles with various organizations, I don’t receive compensation or direction from them (or from any other entities) for the views expressed in my essays, unless explicitly stated otherwise in a particular piece. In some cases, I may receive an honorarium from the publisher as compensation for contributing the essay itself.
2. Depending on one’s point of view, this is a necessary rebalancing away from extreme “safetyism,” a term that has, in some ways, lost its meaning, given that it has been applied to everything from content moderation that stifles speech to constraining model behavior to steer clear of contributing to catastrophic harms. The new emphasis on “security” in the U.S., the UK, and elsewhere seems to focus primarily on catastrophic and existential threats, like the use of AI to produce chemical, biological, radiological, and nuclear weapons. However, there is also an “anti-woke” strain to this effort that threatens to veer from an attempt at content neutrality toward a policing of speech that might simply become the mirror image of “woke” censorship.
3. The BRICS position contains a fundamental contradiction: it calls for unified global governance through the UN while at the same time demanding that countries be allowed to establish their own regulations within their jurisdictions. This tension likely reflects competing interests within BRICS itself, particularly China’s longstanding advocacy for “cyber sovereignty” (the principle that states should control information flows within their borders). From this perspective, the BRICS call for “unified” UN-based governance may be less about true multilateralism and more about creating, in some instances, international cover for authoritarian control over domestic AI systems. The emphasis on preventing the dominance of some countries (e.g., G7 countries) while preserving “technological autonomy” suggests BRICS countries want to avoid external constraints on their AI governance while potentially constraining others through international frameworks they can influence or block.


