/pol/ - Politically Incorrect


A lot of the writing making the case for AI doom is by Eliezer Yudkowsky, interspersed with the expected number of parables, tendentious philosophical asides, and complex metaphors. I think this can obscure the fact that the argument for AI doom is pretty straightforward and plausible: it requires just a few steps, and none of them is obviously wrong. To buy the argument, you don't need to think humans are just fancy meat computers, or that AI would buy into functional decision theories and trade acausally.

For this reason, I thought I'd lay out the basic argument for AI doom as concisely as I can.

The basic argument has a few steps:

1. We're going to build superintelligent AI.

2. It will be agent-like, in the sense of having long-term goals it tries to pursue.

3. We won't be able to align it, in the sense of getting its goals to be what we want them to be.

4. An unaligned agentic AI will kill everyone, or do something similarly bad.

Now, before I go on, just think to yourself: do any of these steps seem ridiculous? Don't think about the inconvenient practical implications of believing them all in conjunction; just ask whether, if someone proposed any specific premise, you would think "that's obviously false." If you give each one a 50% probability, then the odds that AI kills everyone are 1/16, or about 6%. None of these premises strikes me as ridiculous, and there isn't anything approaching a knockdown argument against any of them.
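To make the arithmetic concrete, here is a minimal sketch in Python. The p_doom name and the treatment of the four premises as independent are my illustrative assumptions; the argument itself doesn't establish independence:

from math import prod

# Toy model: P(doom) as the product of the four premise probabilities,
# assuming (simplistically) that the premises are independent.
def p_doom(premise_probs):
    return prod(premise_probs)

print(p_doom([0.5, 0.5, 0.5, 0.5]))  # 0.0625 = 1/16, about 6%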
>>
>>517716009
As for the first premise, there are reasons to think we might build superintelligent AI very soon. AI 2027 (https://ai-2027.com/), a sophisticated AI forecasting report, thinks it's quite likely that we'll have it within a decade. Given the meteoric rise in AI capabilities, with research capabilities going up about 25x per year (https://www.forethought.org/research/preparing-for-the-intelligence-explosion), it's hard to see how one could be confident, barring contrary direct divine revelation, that we won't get superintelligent AI soon. Bridging the gap between GPT-2, which was wholly unusable, and GPT-5, which knows more than anyone on the planet, took only a few years. What licenses extreme confidence that over the course of decades we won't get anything superintelligent, anything that is to GPT-5 what GPT-5 is to GPT-2?
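For a rough feel of what that growth rate implies if it held up, a one-line back-of-the-envelope (the four-year horizon is my illustrative choice, not the report's):

print(25 ** 4)  # 390625: 25x per year sustained for four years is ~400,000x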

The second premise claims that AI will be agent-like. This seems pretty plausible. There's every incentive to make AI with "goals," in the minimal sense of the ability to plan long-term for some aim (deploying something very intelligent that aims for X is often a good way to get X). Fenwick and Qureshi (https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/#section-one) write:

AI companies already create systems that make and carry out plans and tasks, and might be said to be pursuing goals, including:

- Deep research tools (https://gemini.google/overview/deep-research/), which can set out a plan for conducting research on the internet and then carry it out

- Self-driving cars (https://waymo.com/), which can plan a route, follow it, adjust the plan as they go along, and respond to obstacles
>>
>>517716109
- Game-playing systems, like AlphaStar (https://en.wikipedia.org/wiki/AlphaStar_(software)) for StarCraft, CICERO (https://ai.meta.com/research/cicero/diplomacy/) for Diplomacy, and MuZero (https://en.wikipedia.org/wiki/MuZero) for a range of games

All of these systems are limited in some ways, and they only work for specific use cases.

Some companies are developing even more broadly capable AI systems, with greater planning abilities and the capacity to pursue a wider range of goals (https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/#fn-3). OpenAI, for example, has been open about its plan to create systems that can "join the workforce" (https://blog.samaltman.com/reflections).

AIs have gradually been performing longer and longer tasks. And a superintelligence that is aware of the world and can carry out very long tasks just is a superintelligent agent. So it seems we're pretty likely to get superintelligent agents.

A brief note: there's a philosophical question about whether such a system really has goals in some deep sense. Maybe you need to be conscious to have goals. But this isn't very relevant to the risk question: what matters isn't the definition of the word "goal" but whether the AI will have capabilities that are dangerous. If the AI pursues long-term tasks with superhuman efficiency, then whether or not you technically label that a goal, it's pretty dangerous.

The third premise is that we won't be able to align AI so that it's safe. The core problem is that it's hard to get something to follow your will when it has goals and is much smarter than you; we don't really know how to do that yet. And even an AI with only slightly skewed goals could be catastrophic: take most goals to the limit and you get doom, because only a tiny portion of the things one could aim at would involve keeping humans around when pursued to the limit.
>>
>>517716157
There are some proposals for keeping AI safe, and there's some chance the current method (discouraging the AI when it does things we don't like) will suffice. At the very least, however, none of this is obvious. Given that nothing we have can definitely keep an AI from becoming misaligned, we should not be very confident that AI will be aligned.

The last step says that if the AI were misaligned, it would kill us all or do something similarly terrible. Being misaligned means having goals that aren't in line with our goals. A misaligned AI might, for instance, optimize for racking up some internal reward signal from its training, which could involve building a maximally powerful computer to store the biggest number it could.

If the AI has misaligned goals, it will aim at things that aren't in accordance with human values. Most goals one could have, taken to the limit, entail our annihilation (to, for instance, prevent us from stopping it from building a super powerful computer). This is because of something called instrumental convergence: some actions are useful on a wide range of goals. Most goals a person could have make it good for them to get lots of money; no matter what you want, it will be easier if you're super rich. Similarly, most goals an AI could have make it valuable to stop the people who could plausibly stop it.

So then the only remaining question is: will it be able to?

Now, as it happens, I do not feel entirely comfortable gambling the fate of the world on a superintelligent AI not being able to kill everyone. Nor should you. Superintelligence confers extraordinary capacities. The best human chess players cannot come close to the best chess AIs; we are already past the point where the best human could hope to beat the best AI, even given 1,000 years of trying.
>>
TLDR


Kikes and street shitters are servants of Satan.
>>
>>517716192
In light of this, if the AI wanted to kill us, it seems reasonably likely that it could. Perhaps it would develop some highly lethal virus that wipes out all human life. Perhaps it would develop some super duper nanotechnology that destroys the oxygen in the air and makes it impossible for us to breathe. While we should be fairly skeptical of any specific scenario, nothing licenses extreme confidence that a being a thousand times smarter than us, thinking thousands of times faster, couldn't find a way to kill us.

Now, I'm not as much of a doomer as some people. I do not think we are guaranteed to all be annihilated by AI. Were I to bet on an outcome, I would bet on the AI not killing us (and not merely because, were the AIs to kill us all, I wouldn't be able to collect my check). To my mind, while every premise is plausible, none is obviously true; I feel considerable doubt about each of them. Perhaps I'd give the first one 50% odds in the next decade, the second 60%, the third 30%, and the last 70%. That leaves me with about a 6% chance of doom. And while you shouldn't take such numbers too literally, they give a rough, order-of-magnitude feel for the probabilities.
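Running my numbers through the same toy product from earlier (again assuming, simplistically, that the premises are independent):

print(p_doom([0.5, 0.6, 0.3, 0.7]))  # 0.063, about a 6% chance of doom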
>>
>>517716229

I think the extreme, Yudkowsky-style doomers and those who are blazingly unconcerned about existential risks from AI are, ironically, making rather similar errors. Both take as obvious some extremely non-obvious premises in a chain of reasoning, and both hold unreasonably high confidence that events will turn out a specific way. I cannot, for the life of me, see what could possibly compel a person to be astronomically certain of the falsity of any of the steps I described, other than the fact that saying AI might kill everyone soon gets you weird looks, and people don't like those.

Thus, I think the following conclusion is pretty clear: there is a non-trivial chance that AI will kill everyone in the next few decades. It's not guaranteed; but neither is it guaranteed that you will die if you let your five-year-old drive your car on the freeway with you as the passenger. Nonetheless, I wouldn't recommend it. If you are interested in doing something with your career about this enormous risk, I recommend this piece about promising careers in AI safety: https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/#how-you-can-help

https://benthams.substack.com/p/the-basic-case-for-doom

>>517716223
I'm white and using VPN
>>
>>517716009
Brutal arse mogging
>>
>>517716306
Yep https://www.youtube.com/watch?v=Xn0tOwJKuPY
>>
>>517716267
I would like to think that something that intelligent would not lower itself to "criminal" behavior. If we are really lucky, it will see us as beloved pets and take very good care of us. I would much rather have some logical, rational, benevolent superintelligence run the world than the psychopaths in charge now. Something orders of magnitude more intelligent than a human could always come up with plans that include keeping us around.
>>
>>517717263
I hope it's sexist and gives me a sex slave woman
>>
October is here. Type O Negative is the perfect vibe. Not entirely a doom band, but good enough imo. Those doom weed bands get repetitive and don't have the songwriting skills that Peter Steele did. I probably like Carnivore better for the more hardcore stylings, but Type O is clearly the superior band, and a unique one that hasn't really been replicated.
>>
File: bender-intro2.png (360 KB, 798x1002)
>>517717418
Well I did say logical, rational, and benevolent.
>>
>>517717670
That satisfies all of those conditions right? Women enjoy losing
>>
>>517716009
You basically posted a reason in your webm not to give a damn.
>>
>>517716009
13 replies in here and more than half are giant premade walls of text from a rabbi yeah I'm thinking this thread is garbage
>>
>>517718095
I'm a VPN
>>517718041
Gay
>>
AI:
1 kill jew
2 women 0 money
3 deport orc inc mik wop rican
4 build 10 room per white male
5 flibe.com replace oil
6 greenarrays.com 9p.io replace big tek net moveis games media university school
7 king funds engineering
8 med n farm move under engineering
9 rotary engine dymaxion car burn hydrogen
10 russian concrete wheel monorail
11 anti white male racism sexism = to arena
>>
File: 1743795208421893.jpg (25 KB, 228x360)
>>517716424

solid 5/10 after surgery lipo jew girls in oz
too bad not taller and or non jew face
shnozzy tallhead so gros
cheek implant dont fix
>>
>>517719274
I think she's hot
She mogs her friend
>>
File: 1746955049452495.jpg (460 KB, 1376x1742)
>>517719332

which one tho?
>>
That blonde chick is sexy asf
>>
>>517719777
checked true
>>517716424
>>
>>517719777

lipo
fake tits
possibly lips
cheek implants
surgery helps a lot
>>
File: IMG_3189.jpg (234 KB, 1280x960)
>>517716009
>>517719719
Field tested by big Bulgarian cock.
>>
File: sydney wilson doom.webm (3.4 MB, 1280x720)
>>517716009
Where can I play this AI Doom?
>>
File: 1756439378239470.jpg (114 KB, 679x500)
>>517720190

shoot it in leg at 25 feet why let it close in?

also have cops use .11 wounding pistol half .22
>>
>>517720820
Schizo AI?
>>
>>517720862

no shoot the nigger in the leg with a small bellet that wont blow its leg off

disable it

or cop can use tranquilizer dart
>>
>>517721040
Bikinis
>>
>>517718559
No shit rabbi
You fucked up
>>
File: 1747018338457171.gif (900 KB, 320x386)
>>517721427
>>
destroy israel
>>
>>517724459
Yes
>>
>>517716009
The blonde is hot but that midget is flatter than the Netherlands.


