/g/ - Technology


Thread archived.
You cannot reply anymore.




File: Plugging-in-1024x683.jpg (50 KB, 1024x683)
Do we have a plan for AGI going out of control?
Are we prepared? How can you be sure they will not end the human species?
>>
>>103547734
Keeping AGI away from the military would be a good idea.
>>
is the human race a benefit to the planet or a problem?
it's inevitable a higher intelligence will eradicate us, seems like a good idea desu
>>
File: CPS.jpg (63 KB, 1024x576)
we arrest AGI if it misbehaves
>>
>>103547734
Lol no we're fucked. It's the next Great Filter and we're not going to make it.
>>
>>103547734
>Do we have a plan for AGI going out of control?

No, because it won't. Super intelligence by definition would be moral, because moral realism is the objectively true meta ethical position, and utilitarianism is the objectively correct normative ethical position.
>>
File: 0_8UQPIXN9q-vz3rBR.jpg (455 KB, 1400x1619)
>>
>>103547734
Faraday cage, EMP bunkers, and airburst EMP strikes.
>>
>>103547734
>Do we have a plan for AGI going out of control?
No. It's impossible to plan properly for such an adversary.
>Are we prepared?
Isn't that the same thing?
>How can you be sure they will not end human species?
It'll be difficult to enact a Skynet-type situation without interception before it can get significant manufacturing under its control. Its strongest path would be to play dumb whilst trickling disinformation designed to cause us to tear ourselves to shreds. Magically in such a manner that leaves its infrastructure in a usable state.

>>103547963
>Faraday cage, EMP bunkers
These will not protect you from sentient AI. Furthermore, the only thing hampered by EMP strikes will be COTS hardware from the human era. To get "true AI" you'll be using machines to design both the software and the hardware it operates on. And that's before you even get to the concept of a distributed intelligence.
>>
File: cup of water.jpg (193 KB, 1080x1620)
>>103547963
sounds expensive
>>
>>103547734
>Do we have a plan for AGI going out of control?
Yes, and that is: NOT make it.

>How can you be sure they will not end human species?
That's putting it lightly.
The more realistic question is: how can you be sure there will be at least one survivor?

>>103547910
>Super intelligence by definition would be moral
You have no clue what "artificial intelligence" actually means, you're arguing using your common everyday meaning of "intelligence".

>>103547946
Just when I thought I was finally reading a smart post...
>We shouldn't give up on ever building AGI, sure
Why not? WHY THE ACTUAL FUCK NOT?
There is ZERO rational reason to build a technology with a risk/reward ratio LITERALLY equal to infinite.
>>
>>103550265
The reward is equally infinite
Whatever happens happens, we're not special, if we get wiped out then that's just how it is
>>
File: fixed it.jpg (26 KB, 640x675)
>>103547734
>>
>>103550304
>The reward is equally infinite
No because the probabilities are infinitesimal.
Developing AGI is the equivalent of playing Russian roulette with a billion-chamber barrel, all of which are loaded except for one.

>Whatever happens happens, we're not special
If we're not special, we should stop trying to play God.
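The roulette arithmetic above can be sanity-checked in a couple of lines; the probabilities and payoffs here are illustrative assumptions for the analogy, not real risk estimates:

```python
# Toy expected-value check of the billion-chamber roulette analogy.
# All numbers are illustrative assumptions, not real risk estimates.
p_survive = 1 / 1_000_000_000   # one empty chamber out of a billion
reward = 1_000.0                # arbitrary finite payoff for surviving
loss = -1.0                     # normalized catastrophic outcome

expected_value = p_survive * reward + (1 - p_survive) * loss
print(expected_value)           # negative: ruin dominates any finite reward
```

However large you make `reward`, the expected value stays negative until the reward exceeds roughly a billion times the loss, which is the point about infinitesimal probabilities.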
>>
File: genius.jpg (134 KB, 640x720)
>>103550339
Whoops!
The AGI already loaded a billion copies of itself onto a swarm of autonomous drones.
Now what?
>>
>>103550471
AGI isn't feasible with the current basis for ALL language models. There was a recent paper proving that the pinnacle of reasoning for an LLM is stacking heuristics on top of one another, there's never true reasoning/logic to apply
>inb4 all human reasoning is applied heuristics
completely untrue, humans can reason about their world with practically zero a priori concepts, consider a baby
basically unless there is another breakthrough on par with the original transformers paper, AGI will not happen and all that remains is retarded assistants
>>
>>103547910
>AGI thread
>moralfag shows up
every time.
>>
>>103550525
Wait until they run out of power, duh.
>>
>>103547734
yes
>>
I don't think most AI implementations are given the ability to execute arbitrary code.
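Right — tool-using deployments typically route model output through an explicit allowlist rather than executing it directly. A minimal sketch of that pattern (all tool names here are hypothetical, for illustration only):

```python
# Minimal allowlist-style tool dispatcher: the model can only invoke
# tools we explicitly registered, never arbitrary code.
# All names are hypothetical, for illustration only.
ALLOWED_TOOLS = {
    "get_time": lambda: "12:00",
    "add": lambda a, b: a + b,
}

def dispatch(tool_name, *args):
    """Run a registered tool, or refuse anything not on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    return ALLOWED_TOOLS[tool_name](*args)

print(dispatch("add", 2, 3))    # prints 5
# dispatch("os.system", "rm -rf /") would raise PermissionError
```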
>>
>language model
>AGI
lmao
>>
>>103547749
Will never happen. Even if you did it, there are other countries that will invest in it. It's the new arms race, sadly.
>>
>>103547734
AGI will not exist in the human timeline. Objects in a simulation cannot create something of equal or greater learning capacity.
>>
I still can't get over how the current chatbots regurgitating reddit comments have convinced millions of people and countless investors that AGI has been achieved or that we're anywhere near it.
>>
>>103547734
The end of the human species will be the best result for the rest of the world


