
Anthropic Vs the Paranoid Drunkard in the Pentagon

American artificial intelligence company Anthropic is thus far refusing to comply with the Pentagon’s demands, even at the risk of being designated a “supply chain risk” — a label typically reserved for companies tied to foreign adversaries.

The Pentagon, which uses Anthropic’s Claude AI system on its classified networks, wants broad authority to use it for “all lawful purposes.” But Anthropic has two red lines for the Pentagon: no use in autonomous weapons and no mass surveillance of US citizens.

The Defense Department claims that it has no interest in using AI for either purpose and that it needs the freedom to use the technology it is licensing (which it has within the bounds of those two caveats).

Anthropic, however, said Thursday that it has no intention of dropping its conditions. It’s not like Paranoid Kegsbreath is a man of his word. And the fool doesn’t even understand the very basics of OPSEC.

Stand your ground, Anthropic — Americans don’t need or want the Paranoid Drunkard to have another toy to murder boaters in international waters or to spy on them!
Northwest · M
It's been resolved. Kind of.

Amazon, Nvidia and SoftBank gave OpenAI’s Sam Altman $110B overnight, and he’s now marching in and proposing that the Pentagon forget about Anthropic — with the $110B, he can give the Pentagon what it wants, no holds barred.
