Pentagon Turns to Ex-Uber Executive in Anthropic Feud Over AI
Breaking News · Mar 8, 2026 · 4 min read

Pentagon Taps Ex-Uber Executive Amid Feud With Anthropic Over AI Use

WASHINGTON — The Pentagon has enlisted Emil Michael, a former top Uber executive known for aggressive dealmaking, to help navigate a growing dispute with Anthropic over restrictions on its Claude AI model, particularly regarding military applications like autonomous weapons.

The conflict escalated after President Donald Trump ordered federal agencies to stop using Claude immediately, while granting the Pentagon a six-month grace period to phase the technology out of classified systems, including those involved in the Iran conflict. Anthropic has maintained that it sought to restrict its AI in only two specific areas: mass surveillance of Americans and fully autonomous weapons. The feud highlights tensions between AI safety policies and national security needs.

Michael, who served as Uber’s chief business officer during its rapid global expansion, revealed details of months-long negotiations with Anthropic CEO Dario Amodei during an appearance on the “All-In” podcast alongside venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya. His involvement signals the Defense Department’s push for a pragmatic approach to securing AI capabilities despite corporate restrictions.

Clash Over Military AI Applications

The dispute centers on Anthropic’s usage policies for Claude, one of the leading large language models. According to multiple reports, Anthropic sought to prohibit its technology from being used in fully autonomous lethal weapons systems and mass domestic surveillance.

Pentagon officials, including Michael in his role as the department's chief technology officer, clashed with the company over these limitations. The military has deeply integrated Claude into classified systems, making an abrupt cutoff operationally challenging. The six-month transition period ordered by the Trump administration reflects how embedded the AI has become across defense operations.

Michael's record as an aggressive negotiator during Uber's battles with regulators makes him a strategic choice for the Pentagon in these talks. His podcast comments provided the most detailed public account yet of the negotiations with Amodei.

Broader Context of AI and Defense

The feud illuminates deeper ethical questions about the militarization of advanced AI. As reported by The Guardian, the conflict between Anthropic and the U.S. military highlights fault lines between commercial AI developers’ safety principles and the operational demands of modern warfare.

Anthropic, founded by former OpenAI executives including Dario Amodei, has positioned itself as a leader in “constitutional AI” and responsible development practices. Its stance on restricting certain military uses aligns with growing concerns among some AI labs about lethal autonomous weapons systems, often referred to as “killer robots.”

The Pentagon’s response underscores the Defense Department’s determination to maintain access to cutting-edge commercial AI despite these corporate policies. Defense Secretary Pete Hegseth has been involved in the high-level discussions alongside Amodei.

Impact on AI Industry and National Security

For AI developers, the dispute raises questions about the viability of imposing strict usage restrictions on foundational models when dealing with government customers. Companies like Anthropic, OpenAI, and Google DeepMind have all implemented various safety policies, but few have faced such direct pushback from the world’s largest military power.

The situation could influence how other AI firms structure their government contracts and acceptable use policies. It also highlights the strategic importance of AI to U.S. defense strategy, particularly as competition with China intensifies in artificial intelligence capabilities.

For the Pentagon, transitioning away from Claude while maintaining operational effectiveness will test its ability to diversify AI suppliers and, potentially, to develop military-specific models less constrained by commercial usage policies.

What’s Next

The six-month window for the Pentagon to phase out Claude creates an urgent timeline for identifying and integrating alternative AI systems. Whether the Defense Department will pursue modified agreements with Anthropic, shift to other commercial providers, or accelerate internal development remains unclear.

Michael’s continued involvement suggests negotiations may still be active despite the Trump administration’s directive. The outcome could set important precedents for future relationships between frontier AI labs and national security agencies.

The episode also adds to ongoing debates about export controls, AI safety, and the appropriate boundaries between commercial AI development and military applications.

Sources

Original source: bloomberg.com
