US Rejects Global AI Governance at Delhi Summit, Calls for Decentralized Approach
At the AI Impact Summit in Delhi, the United States delegation, led by White House technology adviser Michael Kratsios, firmly rejected calls for global governance of artificial intelligence. This stance contrasts sharply with that of other world leaders and tech executives, including Google DeepMind's Sir Demis Hassabis and OpenAI's Sam Altman, who advocated urgent, coordinated regulation to address AI's serious threats. The summit highlighted a growing international divide over how to manage the rapid advancement of AI, balancing innovation against risks such as the loss of control over autonomous systems and misuse by bad actors.
The global conversation on artificial intelligence regulation reached a critical juncture at the AI Impact Summit in Delhi, where a fundamental divide emerged between major powers. While many tech leaders and politicians called for increased international cooperation and governance, the United States delegation delivered a clear, opposing message. White House technology adviser Michael Kratsios, head of the US delegation, stated unequivocally: "We totally reject global governance of AI." This declaration underscores a pivotal debate on whether the future of this transformative technology should be shaped by centralized, global rules or through decentralized, national approaches that prioritize innovation and adoption.

The Call for Urgent Action and Smart Regulation
In contrast to the US position, prominent figures in the AI industry emphasized the need for swift regulatory frameworks. Sir Demis Hassabis, the boss of Google DeepMind, told the BBC in an exclusive interview that more research on AI threats "needs to be done urgently." He advocated "smart regulation" to address "the real risks" posed by the technology. Hassabis identified two primary dangers: the potential for AI to be weaponized by "bad actors" and the existential risk of losing control over increasingly powerful and autonomous systems. His call was echoed by Sam Altman, CEO of OpenAI, who also pressed for "urgent regulation" during the summit proceedings.

The US Stance: Rejection of Bureaucracy and Centralized Control
The American rejection of global governance is rooted in a philosophy that prioritizes technological adoption and innovation over restrictive international bureaucracy. Michael Kratsios elaborated on this viewpoint, stating, "AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralized control." This perspective aligns with the Trump administration's repeated assertions on the matter, signaling a preference for a market-driven, nation-state-led approach to AI development. The US argues that top-down, global governance could stifle the very innovation needed to solve the challenges AI presents, and that safety can be achieved through other means, such as industry standards and bilateral agreements.

International Perspectives and the Search for Consensus
The summit, attended by delegates from over 100 countries including several world leaders, revealed a spectrum of opinions. Indian Prime Minister Narendra Modi emphasized the necessity for countries to work together to harness AI's benefits. Representing the UK government, Deputy Prime Minister David Lammy MP stressed that AI safety was a shared responsibility, requiring politicians to work "hand in hand" with tech companies, with public benefit and security as paramount concerns. Despite these calls for collaboration, the US position creates a significant hurdle for any unified, global statement on AI governance, potentially leading to a fragmented regulatory landscape.

The Path Forward: Guardrails Without Global Government
The debate at the Delhi summit does not dismiss the need for oversight but centers on its form. Sir Demis Hassabis spoke of the importance of building "robust guardrails," a concept that may be compatible with both centralized and decentralized models. The challenge, as Hassabis admitted, is the blistering pace of AI development, which makes regulation inherently difficult. The outcome suggests a future where major AI powers like the US, China, and the EU may develop their own distinct regulatory paradigms, competing not just technologically but also in setting the rules of the game. This divergence could define the next decade of AI, impacting everything from economic competitiveness to global security.