US military used Claude AI in Iran strikes hours after Trump banned Anthropic: Report – Firstpost




Amid the ongoing US and Israeli strikes on Iran, a Wall Street Journal report revealed that the American military used artificial-intelligence software from San Francisco-based startup Anthropic to plan the major airstrike against Tehran. Notably, Anthropic's AI tool, Claude, was used after Trump directed federal agencies to end the use of the company's AI systems.

The revelation was made to the American news outlet by people familiar with the matter, who asked to remain anonymous. The sources told the WSJ that commands around the world, including the US Central Command in West Asia, employed Anthropic’s Claude AI tool for tasks such as intelligence assessments.


The tool was also used for targeting and for simulating battlefield scenarios ahead of the operation. Centcom has declined to comment on specific systems in its ongoing operations against Iran. Still, the use of Claude in such high-stakes missions shows that the AI tool is already integrated into US military operations, even as relations between Anthropic and the Pentagon have deteriorated sharply.

The standoff between the Trump administration and Anthropic

It is important to note that the standoff between the US Department of Defence and Anthropic stems from a dispute over the terms of use of the company's AI models. Last week, the Trump administration ordered all federal agencies to stop working with Anthropic.

It directed the Pentagon to designate the company a security threat and risk to its defence supply chain. The directive followed contract negotiations in which Anthropic refused to grant the Pentagon the right to use Claude in all lawful scenarios it might require, the Wall Street Journal reported.

The conflict prompted the Department of Defence to secure alternative contracts for AI tools from other developers. The Pentagon has reached out to OpenAI, maker of ChatGPT, and Elon Musk's xAI for models that can run in classified settings. However, military officials and AI experts say fully replacing Claude across all systems could take months.
