
US Used Claude AI Through Palantir Partnership In Classified Venezuela Raid That Led To Maduro’s Arrest

The United States military relied on artificial intelligence during last month's operation in Venezuela that led to the capture of former president Nicolas Maduro, according to a report by the Wall Street Journal. The mission marked the first known instance of Anthropic's AI model being used in a classified Pentagon operation.


First Classified Use of Anthropic's AI

The report states that Claude, developed by Anthropic, was utilised during the mission that resulted in Maduro and his wife being captured in Caracas after multiple sites were bombed. Anthropic thus became the first AI model developer to see its system used by the Pentagon in a classified setting.

The deployment came through a partnership arrangement involving Palantir Technologies, whose data tools are already integrated into Defence Department systems. Because of that existing relationship, Claude was made accessible within classified environments via third parties.

Military adoption is widely seen as a significant credibility milestone for AI companies. Securing defence contracts enhances legitimacy and strengthens investor confidence in an intensely competitive sector where valuations are often driven by future potential.

Usage Policies and Ethical Concerns

Anthropic's publicly stated guidelines prohibit Claude from facilitating violence, developing weapons or conducting surveillance. Despite these restrictions, the AI system was reportedly involved in the Venezuela operation.

Responding to questions, an Anthropic spokesperson said: "Any use of Claude, whether in the private sector or across government, is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."

Anthropic's internal concerns over how the tool was being used have prompted US officials to consider cancelling contracts worth up to $200 million, according to the Wall Street Journal.

Tensions Between AI Firms and Defence Officials

Anthropic Chief Executive Dario Amodei has consistently advocated for tighter regulation and stronger safeguards around artificial intelligence. He has publicly warned against AI being used in autonomous lethal operations and domestic surveillance, both of which remain contentious elements in the company's relationship with the Pentagon.

In January, Defence Secretary Pete Hegseth signalled a firm stance on the issue. He announced that the Pentagon would not collaborate with AI models that "won't allow you to fight wars", referencing discussions with Anthropic.

Anthropic had signed the $200 million defence contract last summer, placing it among several technology firms building tailored AI tools for the US military.

Expanding AI Role in Defence Networks

Many AI companies are now designing customised systems for the US armed forces, although most operate only on unclassified administrative networks. Anthropic stands out as the only developer whose model is accessible in classified settings through third parties. Even so, the government remains bound by the company's usage policies.

The episode highlights the growing but uneasy partnership between Silicon Valley and Washington, as defence agencies seek advanced AI capabilities while technology firms attempt to balance commercial ambition with ethical guardrails.
