/news/ - Current News
https://fortune.com/2026/03/07/pentagon-emil-michael-anthropic-claude-defense-ai-openai-iran-war-palantir/

The Defense Department’s realization of just how much it relied on Anthropic’s AI came as a shock, one that ultimately led to their dramatic schism, according to a top Pentagon official.

Emil Michael, the department’s under secretary for research and engineering as well as its chief technology officer, detailed the events leading up to the public feud in a Friday episode of the All-In podcast.

After the U.S. military’s raid on Venezuela in early January that captured dictator Nicolas Maduro, Anthropic asked Palantir if its AI was used in the operation. While Anthropic has characterized the inquiry as routine, the Pentagon and Palantir interpreted it as a potential threat to their access.

“I’m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?” Michael recalled. “So I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we’re potentially so dependent on a software provider without another alternative.”

Until recently, Anthropic’s Claude was the only AI model authorized in classified settings. The San Francisco-based startup has said it’s patriotic and seeks to defend the U.S., but won’t allow its AI to be used in mass domestic surveillance or autonomous weapons.

The Pentagon insisted it would use the AI in lawful scenarios and refused to abide by any limits from the company that would go beyond those constraints.

After failing to reach a compromise last week, President Donald Trump ordered the federal government to stop using Anthropic while giving the Pentagon six months to phase it out. Defense Secretary Pete Hegseth also designated the company a supply-chain risk, meaning contractors can’t use it for military work.
>>
For now, the military continues to use Anthropic during the U.S. war on Iran, as AI helps warfighters identify potential targets at a rapid pace.

During his podcast appearance, Michael raised the concern that a rogue developer could “poison the model” to render it ineffective for the military, train it to hallucinate purposefully, or instruct it to not follow instructions.

He then contacted OpenAI, which eventually reached a deal similar to Anthropic’s. Elon Musk’s xAI was also brought into the classified fold, and the Pentagon is trying to get Google’s AI approved for classified settings too.

“I’m not biased,” Michael said. “I just I want all of them. I want to give them all the same exact terms because I need redundancy.”

He acknowledged that Anthropic had become “deeply embedded” in the department while other AI companies hadn’t pursued enterprise customers as aggressively by providing forward-deployed engineers.

The falling-out between the Pentagon and Anthropic highlighted the clash of cultures between the defense establishment and Silicon Valley, which has its roots in military innovations but has since turned squeamish about seeing its technology used for war.

In fact, a top robotics engineer at OpenAI announced her resignation from the company on Saturday, citing the same concerns Anthropic raised.

“This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” Caitlin Kalinowski posted on X and LinkedIn.
>>
>>1495668
>a contractor for the military is surprised that their AI, which is being used by the military... will be used by the military
What the hell did they think it would be used for then?
>>
So that's why the school was targeted
>>
>>1495675
>Oppenheimer.jpg
LE BOMB...
LE BOOM?!?!??!?!?!?!?!?
>>
>>1495675
Well, an intelligent person would realize that giving autonomous control of weapons to an LLM that is prone to hallucinating is a bad idea. At every step of the process, a human needs to be in the loop to at least say "Yes, that is a valid target" or "No, that is a school".

I'm sure it will come out later that some shoddy prompt or context setup was used, because our generals are boomers and don't realize context history matters more to effective LLM usage than prompting. It probably got queried to target every building associated with the Iranian guard, and it saw something on Reddit written 10 years ago about the school that was bombed being used to train their children. Effectively, we're seeing the downside of fully autonomous weapons right now. It'll probably get buried and only unearthed in the next administration, where nobody will be arrested and nothing will change.
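To spell out what "human in the loop" even means at the most basic level, here's a purely illustrative Python sketch. Every name in it (suggest_targets, human_approves, the protected_site flag, the confidence numbers) is made up for the example and has nothing to do with any real DoD, Palantir, or Anthropic system.

# Hypothetical sketch of a human-in-the-loop approval gate.
# All function names and data are invented for illustration only.

def suggest_targets(candidates, threshold=0.8):
    # The model only *proposes* targets above a confidence threshold.
    return [t for t in candidates if t["confidence"] >= threshold]

def human_approves(target):
    # A person, not the model, makes the final call.
    answer = input(f"Strike {target['name']} at {target['coords']}? [y/N] ")
    return answer.strip().lower() == "y"

def review(candidates):
    for target in suggest_targets(candidates):
        if target.get("protected_site"):
            # Hard rule: flagged sites (schools, hospitals) never even reach the approval prompt.
            print("blocked (protected site):", target["name"])
            continue
        verdict = "cleared" if human_approves(target) else "rejected"
        print(verdict + ":", target["name"])

if __name__ == "__main__":
    review([
        {"name": "radar site", "coords": "34.2N 47.1E", "confidence": 0.93},
        {"name": "school", "coords": "34.3N 47.0E", "confidence": 0.91, "protected_site": True},
    ])

The entire argument in this thread is about whether the human_approves step exists at all, and whether a model that hallucinates can be trusted not to talk its way past the protected_site check.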
>>
>>1495669
Based Anthropic. Autonomous weapons are dangerous and they take jobs from soldiers.
>>
More like WOPR from War Games.
>>
>>1495698
>Well, an intelligent person would realize that giving an LLM that is prone to hallucinate autonomous control of weapons is a bad idea.
Haha, jackass. But seriously though, they're given a contract to make a military AI that is focused on killing people... and now they're upset because the AI might be used to kill people? What the fuck did they - This is like Metal Gear all over again. Otacon didn't realize he was making a nuclear death robot even as he was building a nuclear death robot.
>>
>>1495675
It says in the contract that they can use it for everything except two specific things. It doesn't even say it can't be used directly for killing, just that there needs to be a human decision maker involved. It's not unusual for military contractors to include stipulations like that.
>>
>>1495675
>But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got
>>
>>1495710
So the issue is they want a human to make the final decision for an AI to kill instead of letting it decide for itself?
>>
GOOD SONG

GOOD BAND

NOT EXACTLY MUSIC. MORE MESSAGE THAN MUSIC.

https://youtu.be/sTWD0j4tec4?si=8u-kOONE3-Utog7x
>>
>>1495714
The kicker is Anthropic aren't even against fully autonomous weapons in general. What they really said is the guardrails aren't good enough yet.
>>
>>1495709
>But seriously though, they're given a contract to make a military AI that is focused on killing people... and now they're upset because the AI might be used to kill people?
No, they're upset that an AI that hallucinates is being used without humans in the loop. That's reasonable.
Here's a game you can play: go to Claude right now and play 20 questions with it; if it gets it wrong, you die.

>Otacon was
Otacon was right; the moral of Metal Gear is that his fears were absolutely vindicated and he was eventually exonerated: Metal Gear + nuclear weapons was always a stupid idea.
>>
>>1495735
I still think it would be fun to talk to a robot
>>
>>1495709
>they're given a contract to make a military AI that is focused on killing people... and now they're upset because the AI might be used to kill people?
Read >>1495710
>>1495722

Literally "Don't create the Terminator" and the Department of Defense said "We wanna create the terminator!"
>>
>>1495736
You can do it right now. Just go to Claude or GPT and talk to it for a few minutes. It's absolutely free, and it might convince you why putting this thing in charge of an automatic gun is a bad idea.
>>
>>1495738
https://youtu.be/StFY_SG9wlk?si=4blRD2Yh_gaDq7RW
>>
>>1495739
He can help you find the right product, or show you the way into the hospital.
>>
>>1495737
What Cameron based Skynet on - and what Skynet is the diametric opposite of - is the eponymous defense-network computer and the AI that runs it in the 1970 movie Colossus: The Forbin Project. Here's most of what it says near the end:
'This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. [...]
Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.'
Compare that with what WOPR/Joshua says - after running every scenario in 'Global Thermonuclear War' - at the end of WarGames, a movie released one year before Cameron's.
>>
>>1495770
@grok summarize this
>>
>>1495772
Sure! Here's a summary.

Anon seems to have a boner for a random C-list movie from the 70s and is comparing it to the blockbuster hit, Terminator. Based on anon's insistence on drawing parallels where none exist, I recommend ignoring everything that comes out of their cocksucking mouth.

Can I help you with anything else?
>>
>>1495772 admits he's retarded. As does >>1495776, as both lack reading comprehension.
>>
>>1495786
No one knows your gay movie lmao
>>
>>1495788
>gay movie
>>1495776
>have a boner
>Terminator
The duality of the hypocrite
>>
>>1495668
we ruined the world but at least we owned the libs...
>>
>>1495789
Ok boomer your generation is homophobic af lmao terminator boner sounds awesome
>>
>Terminator: 1984
boomer movie, lmao
>>
>>1495796
you're a gay esl shill that doesn't even know how scary that movie is. i wet my pants when i saw it
>>
>>1495798
problem, zoomzoom?
>>
>Terminator: 1984
It's a problem for 1495850
>>
>>1495694
Apparently so. The AI just looked at an old list of targets and went bang bang
>>
>>1495897
Yeah because the US military runs on ai
>>
>>1495714
It’s more that the AI will suggest the target and the human will give the final go-ahead command.
The benefit of using the AI this way is that it can be quicker than a human at recognising potential targets. This administration isn’t exactly against targeting civilians anyway, so it’s virtually a moot point. The inherent danger is that the AI could confuse friendly targets with enemy targets and there’s no oversight; that may already have occurred.
>>
>>1495798
>>1495873
>lol it's bad cause it's old
Words of a brainlet. If it's good it's good, new or old. Terminator is good. Simple as.
>>
>>1495901
It's a better concept than it is a movie. 2/5 stars
>>
>>1495904
They should remake it today with John Cena that would be cool
>>
>>1495904
>2/5 stars
I swear to God zoomers don't get punched enough.
>>
>>1495899
AI-assisted targeting took out a school full of little girls; final approval doesn't mean shit.
>>
>>1496108
Ideally, final approval would be someone going "wait, that's a school, don't shoot that." But as >>1495899 said, this administration doesn't give a shit anyway, so...
>>
>>1496111
>this administration doesn't give a shit anyway
the opposite really, Trump was informed schoolgirls were involved and his immediate reaction was, "fuck 'em"
>>
>>1495770

You are retarded
>>
>>1495709
Did you even try reading the article or are you like every other illiterate nigger on this website?
>they're given a contract to make a military AI that is focused on killing people... and now they're upset because the AI might be used to kill people?
No this is not what the article is saying.

Here, try reading the last paragraph: “This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,”
This is the concern (duhhhh, the thread is called Skynet, duhhhh):
>surveillance of Americans without judicial oversight and lethal autonomy without human authorization
>>
>Terminator: 1984
It's still a problem for 1495850 & 1496124
>James Cameron was inspired by a movie from 14 years earlier - about a defense-network AI that not only prevented war, which it was built to ensure, but would never allow it to happen - and what he created came out one year after a movie with a military AI that figured out that nuclear war was utterly stupid
That which inspired The Terminator is therefore good. Simple as.
>>
>>1495738
That’s not the same Claude the military uses, you buffoon.


