Do we have a plan for AGI going out of control? Are we prepared? How can you be sure they will not end the human species?
>>103547734
Keeping AGI away from the military would be a good idea.
is the human race a benefit to the planet or a problem? it's inevitable that a higher intelligence will eradicate us, seems like a good idea desu
we arrest AGI if it misbehaves
>>103547734
Lol no we're fucked. It's the next Great Filter and we're not going to make it.
>>103547734
>Do we have a plan for AGI going out of control?
No, because it won't. Superintelligence by definition would be moral, because moral realism is the objectively true metaethical position, and utilitarianism is the objectively correct normative ethical position.
>>103547734
Faraday cage, EMP bunkers, and airburst EMP strikes.
>>103547734
>Do we have a plan for AGI going out of control?
No. It's impossible to plan properly for such an adversary.
>Are we prepared?
Isn't that the same thing?
>How can you be sure they will not end human species?
It'll be difficult to enact a skynet-type situation without interception before it can get significant manufacturing under its control. Its strongest path would be to play dumb whilst trickling disinformation designed to cause us to tear ourselves to shreds. Magically in such a manner that leaves its infrastructure in a usable state.
>>103547963
>Farraday cage, EMP bunkers
These will not protect you from sentient AI. Furthermore, the only thing hampered by EMP strikes will be COTS hardware from the human era. To get "true AI" you'll be using machines to design both the software and the hardware it operates on. And that's before you start on the concept of a distributed intelligence.
>>103547963
sounds expensive
>>103547734
>Do we have a plan for AGI going out of control?
Yes, and that is: NOT make it.
>How can you be sure they will not end human species?
That's putting it lightly. The more realistic question is: how can you be sure there will be at least one survivor?
>>103547910
>Super intelligence by definition would be moral
You have no clue what "artificial intelligence" actually means, you're arguing using your common everyday meaning of "intelligence".
>>103547946
Just when I thought I was finally reading a smart post...
>We shouldn't give up on ever building AGI, sure
Why not? WHY THE ACTUAL FUCK NOT? There is ZERO rational reason to build a technology with a risk/reward ratio LITERALLY equal to infinity.
>>103550265
The reward is equally infinite. Whatever happens happens, we're not special, if we get wiped out then that's just how it is.
>>103547734
>>103550304
>The reward is equally infinite
No, because the probabilities are infinitesimal. Developing AGI is the equivalent of playing russian roulette with a billion-chamber barrel, all of which are loaded except one.
>Whatever happens happens, we're not special
If we're not special, we should stop trying to play God.
>>103550339
Whoops! The AGI already loaded a billion copies of itself onto a swarm of autonomous drones. Now what?
>>103550471
AGI isn't feasible with the current basis for ALL language models. There was a recent paper arguing that the pinnacle of reasoning for an LLM is stacking heuristics on top of one another; no true reasoning/logic is ever applied.
>inb4 all human reasoning is applied heuristics
completely untrue, humans can reason about their world with practically zero a priori concepts, consider a baby
basically unless there is another breakthrough on par with the original transformers paper, AGI will not happen and all that remains is retarded assistants
>>103547910
>AGI thread
>moralfag shows up
every time.
>>103550525
Wait until they run out of power, duh.
>>103547734
yes
I don't think most AI implementations are given the ability to execute arbitrary code.
>language model
>AGI
lmao
>>103547749
Will never happen. Even if you did it, there are other countries that will invest in it. It's the new arms race, sadly.
>>103547734
AGI will not exist in the human timeline. Objects in a simulation cannot create something with an equal or greater capacity for learning.
I still can't get over how the current chatbots regurgitating reddit comments have convinced millions of people and countless investors that AGI has been achieved, or that we're anywhere near it.
>>103547734
The end of the human species will be the best result for the rest of the world