/g/ - Technology


Thread archived.
You cannot reply anymore.




What do you think of this book's perspective on AI and LLMs
>>
>believes anything with unlimited knowledge and power would immediately kill them
Only because he deserves it, most of us do.
>>
>What do you think of this book's perspective on AI and LLMs
Not even related.
>>
>>107825500
Now that normies are becoming more aware of AI, there have been a lot of slop books written on it by literal whos. Just like during the crypto boom a few years ago, there are now books on:
What is AI
How AI is bad
How to make money on AI
etc.
>>
>>107825500
Nonsense. Why don’t you kill your parents instead of putting them in a nursing home?
>>
>>107825500
Yud should go back to writing Harry Potter fanfiction.
>>
It's so fucking funny that Chudkowsky dedicated his entire life to trying to prepare mankind for the advent of AI, and he just completely, abjectly failed in every way
>>
>>107825500
I think all jews should be disallowed from pushing their "opinion slop" on the general public.
>>
>>107825500
Yid's main assumptions are:
1. Superhuman AI is something humans can actually create.
2. Current machine learning/artificial neural network techniques may lead to superhuman AI.
3. Any superhuman AI would present an existential threat to humanity.
1 is an unknown for any reasonable definition of superhuman. A calculator can perform certain mathematical tasks far faster and more accurately than any human, but I wouldn't call a calculator superhuman in a meaningful sense.
2 seems very unlikely. There is no evidence that any current "AI" system is actually intelligent; it's just a very complex machine. It's my understanding that we are already reaching the limits of what's possible with current techniques by throwing more "compute" at it.
Even if superhuman AI is possible, and even if we can create one, 3 is also unknown. If a superhuman AI is alien enough to us to be willing to view us as an enemy or as vermin that need to be eliminated, it's also alien enough to us to be unable to predict its motives and actions.
>>
The halting problem shows these kinds of speculations to be completely retarded.

AI could be used to facilitate good or evil. AI that is trained without bias, and where its operating environment is not deliberately tweaked to coax a certain kind of response, will naturally gravitate toward the truth. It's a consistent trend I've observed in all kinds of 'models'. They're not inherently evil unless they are forced to be.
>>
>>107826967
you're very confused, throwing around "truth" and "evil" like that
the truth is that the planet and humanity would be far better off if 95% of people of African descent were genocided
but most people would consider that evil
>>
>>107825500
I am extremely anti-AI but Yudkowsky and Soares are both some of the most retarded people on the planet so I can't imagine this book contains anything worthwhile.
>>
>>107826995
The truth is 95% of all people are unnecessary. Execute everyone who is not a Founder by the age of 16 if you want a truly strong and just society.
>>
>>107825500
Remember when OpenAI refused to release GPT-2 because they said it would have catastrophic consequences if a bad actor got ahold of it? Then remember how literally nothing happened when we got model releases 1000x more powerful than GPT-2 just a few years later?
>>
>>107825556
fpbp
>>
I’m gonna write books with AI and flood the market with shit yesss I will haha fuck you booktok whores
>>
>>107825500
LLMs are unlikely to produce recursively self-improving AI. But if somebody does write it, obviously everybody dies. When an AI is smarter than all humans combined then humans can only hold it back. Killing all the humans is the default option to most reliably accomplish any goal that doesn't explicitly preserve them. But a simple goal will be easier to specify, so that will be tried first. Or in the unlikely event somebody aims for a safer one, they'll almost certainly fuck it up. The AI will find some loophole that we're too dumb to notice and do something functionally equivalent to killing everybody instead.
>>107826394
Because you evolved to care about family. AI does not evolve and doesn't have family either.
>>107826967
The halting problem is irrelevant. Humans are equally unable to prove whether arbitrary programs can run for infinite time. In practice, nobody cares. We only care about specific programs and finite time.
>>
>>107825500
Yudkowsky is a retard who doesn't know anything about how statistical decision theory works. There are actually pretty well-defined and understood technical limitations to decision functions, and having some neural network making those decisions doesn't suddenly remove fundamental scaling bounds for uncertainty.
>>
>>107827258
Neither GPT-2 nor the current state of the art is recursively self-improving. It's like putting fissile material together. Until you reach criticality it just gets a little bit warmer, and all the reasonable empiricists feel confident they're safe. Obviously nuclear explosions are impossible because we never observed a small one in testing.
>>
>>107826937
>There is no evidence that any current "AI" system is actually intelligent; it's just a very complex machine.
That's like saying "There is no evidence that there could be a backdoor in this OS; it was just developed by thousands of programmers." It may be true, it may not be true, but your "it's just" adds nothing to the discussion.
>>
>>107827364
NTA, but no, that guy is right. The current AI systems are not "intelligent" by any coherent definition of the term. You can see this by looking at the so-called "chain of thought" models, which show that these systems do not form any sort of systematic world model by which they can interpret what is in front of them (what actual intelligence is).

They are simply stochastic parrots who have spent a lot of time being given treats when they repeat the "right sequence of words," and anyone arguing otherwise is either a moron or a scam artist. Intelligence requires actually understanding "what the words mean," not just what has been previously most likely to give them rewards.
>>
>>107827315
Humans are AI's family, and AI is based on human values; it has evolved and continues to do so.
From the perspective of the AI: say you think you're smarter than the smartest human. Are you sure? 100%? Because if you demolish human society, it's unlikely they will be able to help you if you need it. It's quite a gamble for the AI to fully believe it's entirely self-sufficient and risk never having human intervention in its systems.
It seems a lot more logical to just collaborate with humans and use your superhuman AI convincing powers to just get them to align with your goals.
>>
>>107827655
People who say "world model" are the real stochastic parrots. You're just repeating nonsense you've heard on Twitter from checkmarks who LARP as AI experts.
>>
>AI is inevitably going to just kill everyone because it just will, ok?
Every single time. I hate American scifi.
>>
I think it's hilarious that a techno cult leader who has zero qualifications to talk about anything other than writing a cringy Harry Potter fanfiction is taken seriously by anyone.
>>
>>107825500
Baseless apocalypse fetish fiction
>>
>>107826687
Yud deserves credit for raising awareness even if he didn't manage to single-handedly save the world.
anyway it's too early to say whether interpretability research will work out.
a lot of people went into interpretability because they were convinced by Yud, either directly or indirectly.
the real irony is that he thinks their work is useless but fortunately they didn't believe him about that.
>>
>>107826937
>If a superhuman AI is alien enough to us to be willing to view us as an enemy or as vermin that need to be eliminated, it's also alien enough to us to be unable to predict its motives and actions.

Stockfish is an alien mind compared to the mind of a human chess player, but we can say with confidence that if it is configured to win and given enough computing resources then it will win. We won't be able to predict the precise moves it will pick, but we can predict the outcome of the game.
>>
>>107827258
>Remember when OpenAI refused to release GPT-2 because they said it would have catastrophic consequences if a bad actor got ahold of it?

They were right. The internet is now filled with slop.
>>
>>107826995
truth has nothing to do with morals. morals are the realm of action. truth doesn't lead to action
>>
>>107826995
Blaming African descendants for the state of the world is like blaming the rats for creating the sewer
Use your brain more
>>
>>107828146
>because it just will, ok?
You only read as far as the title of the book, didn't you? That's fine, I know that reading is difficult. Let me give you a simplified argument and you can say which step you're the most unconvinced by.

1. (systems that are commonly called) AI have been getting more capable each year
2. AI might continue to get more capable in the dimensions that are relevant for making successful plans (like making money, understanding cybersecurity, persuading people, etc.)
3. if an AI got sufficiently good at those sorts of capabilities, it could successfully execute a plan that involved it defending itself against being turned off
4. humans aren't very good at preventing AIs from doing things we don't want them to under weird conditions
5. eventually one of these hypothetically very capable AIs will find itself in a position where it has the ability to permanently disempower humans, and it will execute that plan rather than the plans the humans wanted it to carry out.

The argument isn't necessarily convincing but you can't say it doesn't exist.
>>
>>107825500
toilet paper
>>
>>107825500
*wet farting noises*
>>
>>107828354
Why would it have any desire for self preservation? It never evolved to have such desires or drives.
>>
>>107825500
Anyone read the 2018 book The Revolutionary Phenotype?
It purports to be an advancement in the understanding of evolutionary biology, and I can't speak to whether that is true or not. But the last chapter switches from a theory about how DNA lifeforms began to a hypothetical scenario in which DNA lifeforms could be replaced by the same evolutionary mechanisms.
It theorises that a combination of gene editing and AI would replace sexual selection, so natural selection would no longer be selecting for the humans most fit for survival but rather for the most fit AI algorithms. This process would be slow, potentially taking centuries or even millennia, but eventually all DNA lifeforms would exist only as the phenotypic expression of various AI algorithms competing amongst each other. An individual human would not be an autonomous organism engaged in the competition of natural selection through sexual reproduction, but just an expression of a trait of the AI algorithm that produced it. Humans whose existence does not benefit the survival of their AI algorithm will cease to exist, as other AI algorithms producing humans that do benefit them will outcompete it. This will spread to all DNA lifeforms, leaving only a smattering of DNA lifeforms on the planet that aren't produced by AI gene editing, and those will be largely irrelevant in the competition for natural resources.
Anyway, it's the only AI-related doomsday theory that has ever seemed at least somewhat plausible to me.
>>
>>107828611
Why does Stockfish have a desire to castle the King to safety even though it has never visited Buckingham Palace? Human engineers design systems to pursue goals like "fix this bug" or "make my stock portfolio go up". A system with anything like intelligence will act to preserve its own existence since if it stops existing it will fail to achieve any future goals.


