Why is Anthropic at war with the Pentagon? Will Pete Hegseth blacklist the AI firm? – Firstpost



‘You have until Friday to decide…’ This ultimatum may sound like a dialogue from a movie. However, it’s not. The US Secretary of War, Pete Hegseth, has threatened to terminate Anthropic’s contract with the Pentagon by Friday (February 27) unless the AI startup agrees to the Trump administration’s terms of use.

The threat is, perhaps, Anthropic’s biggest crisis in its five-year existence — if it doesn’t relent to Hegseth’s demands, the AI startup not only loses its Pentagon contract but could also be labelled a “supply chain risk,” meaning that no company doing business with the Department of War would be allowed to use Anthropic’s models.


But why is Dario Amodei sparring with the US Department of War? What’s the beef?

What is Anthropic, and how is it linked to the Department of War?

Anthropic is an AI firm founded in 2021 by former OpenAI executives. Today, the Dario Amodei-led company is best known for building Claude, a popular large language model (LLM).

What sets Anthropic apart from other AI giants is that it calls itself a “responsible” developer in the AI landscape. On its website, the company describes itself as a “Public Benefit Corporation” committed to the “responsible development and maintenance of advanced AI for the long-term benefit of humanity”.

Currently, Anthropic is the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir. A senior Pentagon official told CBS News that Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close.

The Claude AI app is seen in the app store on a phone in New York City. The US Defence Department used Anthropic’s Claude AI, via its Palantir contract, to help with the attack on Venezuela and the capture of former President Nicolás Maduro. AFP

Why is Anthropic at odds with the Pentagon?

At the heart of Anthropic’s battle with the Pentagon is its AI model, Claude, and its reported use during the US military’s operation to capture Venezuela’s Nicolás Maduro in January. When the news emerged that Claude had been used, an Anthropic spokesperson stated that the company “has not discussed the use of Claude for specific operations with the Department of War”.

Anthropic has also sought assurances from the Department of War that its AI system would not be used for mass surveillance of Americans and would not make final targeting decisions in military operations without human involvement.

A source close to the matter was quoted as saying, “Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure, without human judgement.”

These restrictions have pitted Anthropic against the Department of War. Hegseth has maintained that firms providing the Pentagon with AI models must give it complete freedom to do with them what it likes when used for lawful military actions.


But that’s not all. Anthropic’s push for AI safeguards has put it in the crosshairs of the Trump administration. Last October, Trump’s top AI adviser, David Sacks, accused Anthropic of “running a sophisticated regulatory capture strategy based on fear-mongering.” He argues that Anthropic disingenuously warns of extreme risks from AI systems in order to justify regulations on the technology with which only it and a few other AI companies can easily comply.

And Hegseth, too, has taken aim at so-called ‘woke’ AI companies. In January, he slammed AI systems with ideological restrictions. “Department of War AI will not be woke,” he said. “It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”

Anthropic has until 5:01 pm on Friday before the Department of War invokes the Defence Production Act on the company in a bid to compel the use of its models. File image/AFP

How serious has the Pentagon vs Anthropic feud become?

This battle between Hegseth and Anthropic escalated on Tuesday (February 24) with the Pentagon warning that the company’s contract would be cancelled.

At the meeting with Amodei, Hegseth upped the ante, vowing to terminate Anthropic’s contract on February 27 if the AI firm didn’t agree to the Pentagon’s terms, sources reported. A senior Pentagon official said that if Anthropic did not “get on board” with the Department of War, the latter would invoke the Defence Production Act (DPA), a law that gives the president authority to compel companies to do national-security work, as well as label Anthropic a supply-chain risk.


For those who don’t know, being designated a ‘supply chain risk’ is a label typically reserved for foreign adversarial firms, such as China-based Huawei.

And an Axios report on Wednesday stated that Hegseth has asked two defence contractors — Boeing and Lockheed Martin — about their exposure to Claude. This, the news outlet reported, is the first step toward a potential designation of Anthropic as a supply chain risk.

How will Anthropic be impacted by the Pentagon’s move?

Firstly, if the Pentagon terminates its contract with Anthropic, the company would lose out on $200 million, which is a paltry amount for a company that generated $14 billion in revenue in February.

However, deeming Anthropic a supply chain risk has larger implications. The New York Times has reported that the move could force Anthropic to make its product available for free.

Moreover, it would give Anthropic’s competitors an edge. The Pentagon already has an agreement with Elon Musk’s company xAI to use its artificial intelligence model, Grok, on the classified system. Google, another leading AI developer, is also taking on contracts for classified and unclassified work with the Pentagon, having scrapped restrictions on the use of AI for defence purposes in 2024.

Defence Secretary Pete Hegseth has vowed to punish Anthropic for not bending to the Trump administration’s demands. File image/AP

What comes next?

Following the Tuesday meeting, Anthropic said it had continued good-faith conversations with the Pentagon. However, it made no mention of Hegseth’s threat to invoke the Defence Production Act.

But Anthropic, it seems, has been loosening its own safety standards. The AI company recently released an updated version of its Responsible Scaling Policy (RSP), stating that the new policy is a reaction to changes in the market environment.

“The policy environment has shifted toward prioritising AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” the Anthropic announcement read.

In an interview with podcaster Dwarkesh Patel, Amodei also suggested a likely loosening of safety commitments as the firm faces “commercial pressure”.

Owen Daniels, associate director of analysis and fellow at Georgetown University’s Center for Security and Emerging Technology, also told the AP that the feud with the Pentagon will test Anthropic. “Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications. So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”


With inputs from agencies
