AI contract restrictions could threaten military missions, US official says


Fears provider could halt usage at any time.

A senior Pentagon official said that commercial AI contracts signed under the Biden administration contained sweeping operational restrictions that threatened to paralyse US military missions in real time, including the ability to plan and execute combat operations.


Emil Michael, undersecretary of defense for research and engineering, described a moment of alarm when he reviewed the terms governing AI models already embedded in some of the military's most sensitive commands.

He did not name the AI provider whose contracts he was reviewing.

His comments came at the American Dynamism Summit in Washington, a gathering of technology companies keen on space and national security work.

The summit occurred just days after a disagreement over how the Pentagon could use Anthropic's powerful and widely used AI tools led President Donald Trump to ban the startup from government business and label it a national security risk.

"I had a 'holy, holy cow' moment," Michael said.

"There were things ... you couldn't plan an operation ... if it would potentially lead to kinetics" or explosions.

He described dozens of restrictions baked into agreements covering commands responsible for air operations over Iran, China and South America.

Michael said the contracts were structured in a way that, if an operator violated the terms of service, the model could theoretically "just stop in the middle of an operation."

Anthropic's Claude had been the only AI model available to the US Defense Department on its classified systems at the time Michael conducted his review.

His concerns sharpened after a senior executive at an unnamed AI company raised questions about whether its software had been used in what Michael called one of the most successful military operations in recent memory.

Anthropic's Claude was reported to have been used to help plan the US government raid that captured former Venezuelan President Nicolas Maduro in January.

"What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," Michael said.

The disclosures may help explain the dispute between Anthropic and the US Department of Defense.

US Defense Secretary Pete Hegseth declared the company a "supply-chain risk" for its refusal to back down in negotiations over restrictions on autonomous weapons and mass surveillance.

Hours later, rival OpenAI struck its own deal with the Pentagon. A statement by OpenAI CEO Sam Altman suggested that the department had agreed to similar restrictions with OpenAI's models.
