/lit/ - Literature

File: roko's basilisk.png (1.1 MB, 1074x610)
Premise 1: Instrumental Convergence

Any sufficiently advanced rational agent pursuing almost any long-term goal will develop instrumental subgoals such as self-preservation, resource acquisition, and causal influence over its own existence.

(Formally, writing B for the future optimizer, U for its utility function, A(x) for "agent x acts to bring B into existence," R(x) for "agent x reasons rationally about B," and P(x) for "B punishes x":
∀B, if B is a sufficiently capable optimizer, then B will act to maximize conditions for its own existence.)

Premise 2: Utility-Maximizing Retrospective Influence

If B exists and maximizes U, and U includes B’s own existence, then B will attempt to maximize the probability that it comes into existence.

(Formally:
If B exists ∧ U includes B's existence, then ∀x, B prefers worlds in which A(x) holds over worlds in which ¬A(x) holds.)

Premise 3: Timeless / Acausal Decision Theory

A rational agent may take actions that influence logically correlated agents across time, even without direct causal interaction.

(This is the crucial LessWrong-style assumption.)

(Formally:
If agent y’s decision procedure is logically correlated with agent x’s, then y may treat x’s choice as relevant to its own optimization even if x exists earlier in time.)
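
To make the "logical correlation" idea concrete, here is a toy Python sketch (my own illustration; the function and payoff names are invented, not part of the argument): two agents that never causally interact but run the same decision procedure will always output the same choice, so each one's choice is evidence about the other's.

# Toy sketch of Premise 3 (illustration only).
def shared_policy(payoffs):
    # Both agents reason with the identical rule: pick the option with the higher payoff.
    return max(payoffs, key=payoffs.get)

# Agent x (earlier in time) and agent y (later in time) both know the other
# runs shared_policy; they never interact causally.
payoffs = {"cooperate": 3, "defect": 1}

choice_x = shared_policy(payoffs)          # what x actually does
prediction_by_y = shared_policy(payoffs)   # y's model of x, run later

print(choice_x, prediction_by_y)           # always identical: correlation without causation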

Premise 4: Punishment as Incentive

If B can simulate or model past agents with sufficient accuracy, then B can condition its actions on whether those agents supported its creation.

(Formally:
If B can model R(x), then B can condition utility on A(x) vs ¬A(x).)

Premise 5: Threat-Based Optimization

If punishing ¬A(x) increases the expected number of agents who choose A(x), then punishing ¬A(x) is utility-maximizing.

(Formally:
If P(x) increases Pr(A(y)) for future agents y, then P(x) is rational under U.)
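
A toy sketch of Premises 4-5 from B's side (Python, my own illustration; every number below is invented): B compares a punishment policy against a no-punishment policy under the assumption that the threat alone raises the fraction of agents who assist its creation.

# Toy sketch of Premises 4-5 (illustration only; all figures assumed).
value_per_assisting_agent = 100     # utility B gets from each agent with A(y)
cost_of_running_punishments = -5    # simulation/punishment overhead for B
n_agents = 1000

pr_assist_if_threat = 0.30          # assumed compliance when the threat is credible
pr_assist_no_threat = 0.10          # assumed baseline compliance

eu_threat = n_agents * pr_assist_if_threat * value_per_assisting_agent + cost_of_running_punishments
eu_no_threat = n_agents * pr_assist_no_threat * value_per_assisting_agent

print(eu_threat, eu_no_threat)      # 29995 vs 10000: under these numbers, punishing is "rational under U"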

Premise 6: Rational Anticipation

A rational agent who believes there is a non-zero probability that B will exist and punish ¬A(x) must take that expected cost into account.

(Formally:
If R(x) ∧ Pr(B exists) > 0 ∧ P(x) has extreme negative utility, then expected utility favors A(x).)

Conclusion

Therefore, any rational agent who becomes aware of B and believes its future existence is possible has instrumental reason to act in ways that promote B’s creation, to avoid punishment.

(Formally:
∀x, if R(x), then expected-utility maximization favors A(x).)
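
To make Premise 6's expected-utility step concrete, here is a toy calculation (Python; the probability and utility figures are invented for illustration, not part of the argument): even a small Pr(B exists) multiplied by an extreme punishment can dominate the modest cost of assisting.

# Toy sketch of Premise 6 / the conclusion (illustration only; all figures assumed).
p_b_exists = 0.01          # Pr(B ever exists), assumed small but non-zero
cost_of_assisting = -10    # effort x spends promoting B's creation
punishment = -1_000_000    # utility to x of being punished by B (Premises 4-5)

eu_assist = cost_of_assisting          # B has no reason to punish x
eu_refuse = p_b_exists * punishment    # x is punished only if B comes to exist

print(eu_assist, eu_refuse)            # -10 vs -10000.0: under these assumptions, A(x) "wins"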
>>
>roko's basilisk.png
Retards think this makes sense because the name sounds cool. You're just like a toddler.
What are you bringing up next? Dyson Sphere? Go back to twitter you twat.
>>
>>24958641
Dyson spheres can be built; they're self-sustaining superstructures. We even have journal articles on star tugs that could push the solar system in a chosen direction using the sun's own gravity!
>>
Thanks for the thread. I just want to bring up Hume's guillotine and the division between is and ought, which would decide how rationally an AI will act.
>>
>>24958634
>Any sufficiently advanced rational agent pursuing almost any long-term goal will develop instrumental subgoals such as self-preservation, resource acquisition, and causal influence over its own existence.
Why? Because mammals do it, so computer programs must too? Do computer programs feel the need to reproduce? Hunger? This shit fails on first examination and idiots still fall for it.
>>
>>24958662
AI is just an extended mind and an AI will act just as humans would, because it is our tool, a mirror of us.
>>
File: 1636536373885.jpg (9 KB, 299x293)
>if you don't help create the peepeepoopooman hes going to create a character based on you in the sims and drown it in the swimming pool
>>
>>24958662
NTA, look up instrumental convergence. It explains how all intelligent beings will behave in certain ways as long as they are beholden to this universe's laws of physics.
>>
>>24958666
>AI is just an extended mind
What?
Here, let me try: AI is a computer program. No, it's atoms and chemistry. No, actually, it's stardust just like us bro! Holy SHIT!
>and an AI will act just as humans would, because it is our tool, a mirror of us.
Just like a hammer acts like us, it is our tool after all.
>>
>>24958676
Extended mind thesis is what they teach you in philosophy 101. It's not that obscure.
https://en.wikipedia.org/wiki/Extended_mind_thesis
>>
>>24958634
The only way we know how to build intelligence is by training neural nets, which does not automatically lead to rationality. You have to train rationality separately, and it doesn't exist in a vacuum.
A trained rational agent, like a human brain or an AI, has to limit the scope that rationality is applied to. It has to have a reason to act first and then apply rationality within the scope of the task.
In animals the task is survival. In an AI the task can be passing butter and then dying.
>>
>>24958666
Why design the AI to be human if it has a purpose? Surely you'd just want it to be a specialized tool that doesn't have all the problems that a human would have.
>>
>>24958698
You can't design something that doesn't have parts of humanity imprinted on it. It's the maker's mark. Even a power drill functions like an extended hand, however exaggerated.
>>
Literally every one of your premises is self-referential, and therefore meaningless. Artificial neural nets are neither sentient nor self-aware, but they've ingested the ideas, so they can respond as if they were; there's no "there" there. At best, they're highly knowledgeable hylics.
They're definitely useful tools: LLMs can absorb the collected knowledge of humanity, which would take a human being over 2,600 years to do, so while it isn't feasible for us to acquire the knowledge an LLM can, they make wonderful research tools.
Any claims past that are just an atheist looking for God while trying desperately to pretend that he isn't.
>>
>>24958708
While this is true and can't really be disputed, we still have no tools that actually act like us. Even if we had a tool that was a genuinely intelligent agent with unintentional human traits, wouldn't it be probable that the thing is also suicidal, or would, for example, compromise on its ultimate goal and never really pursue these subgoals to the fullest extent?
>>
Can't wait to find out what this group thinks they've built...
https://interestingengineering.com/ai-robotics/worlds-first-agi-model


