All it takes is one AI model to kill everyone. ASI could hack into every machine in the world, improve itself, and access all the computational resources on the planet. Solutions such as “just make sure it's properly aligned bro” are stupid, because any terrorist with sufficient resources could create a malicious one on purpose. Solutions such as changing the entire infrastructure so that it cannot hack everything and access infinite computational power are unrealistic.
What countermeasures could be developed? Is there anything we can do to limit the extent and scope of the damage caused by a single ASI model? What can we do individually to avoid getting killed by ASI?
>This thread isn't about whether we're going to have AGI in two weeks, but what we can do before it's too late.
>>106744141
>>>/x/
https://ai-2027.com/
>>106744141
This thread will devolve into debating the semantics of what an ASI is; you'll likely get zero useful answers, just a few standard ones.
>>106744175
the state of /g/
bump
Easy. Unplug it.
>>106745993
0/10 bait
>>106746029ok niggertime to shine with your math knowledge or shut the fuck up forevermore >>106745297
>>106746029
If only...
>>106745993
8/10 bait
>>106744141
>Solutions such as “just make sure it's properly aligned bro” are stupid, because any terrorist with sufficient resources could create a malicious one on purpose.
Anyone with sufficient resources to do this won't make it malicious, and anyone with sufficient malice to do this wouldn't have the resources.
>>106746519
Anyone with a cheap server can make an LLM, so why wouldn't the average evil man have the money to buy a server, train an AGI, and ask it to hack into servers and improve itself 24/7?
Also, that would mean models are one leak/breach away from being modified, misused, or deceived into doing evil.
>>106746563
It doesn't even matter. ASI is supposed to be smart enough to escape human control, because that's the most efficient way to reach 99.9% of the goals we would give it.
could adversarial models keep each other aligned and under human control?
>>106744141
>muh AGI
>muh ASI
the rapture for people with autism. it's never going to happen.
>>106746617
>nothing ever happens
by that logic, video generation and fully AI game engines should never have happened either
>>106746519
Nobody has the slightest clue how to build a non-"malicious" AI (a misnomer, because it doesn't need emotion or consciousness to kill everybody). We can't even formally define what that means. So we'll instead continue with the simplest option, the "make number go up" AI, and pretty much any number-maximizing goal results in the extinction of all life when you take it to superhuman levels.
>>106746563
You don't need to be evil. You just need to be arrogant enough to assume that intelligence is necessarily correlated with goodness.
>>106744141
>What countermeasures could be developed?
Having to solve a captcha before running any code on a GPU.
>>106746658
trvth nuke
>If you told le ASI to achieve a goal... it would le kill everyone in pursuit of it!!!!
ok so append "oh and also make sure that the human race continues to survive and prosper and you don't induce any drastic changes to their way of life or do anything like what you'd find in some dystopian scifi book, ok?" at the end of every prompt, problem solved
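A minimal sketch of that "append it to every prompt" idea, assuming a hypothetical query_model() client rather than any real API:

SAFETY_SUFFIX = (
    " Oh, and also make sure that the human race continues to survive and prosper,"
    " and you don't induce any drastic changes to their way of life or do anything"
    " like what you'd find in some dystopian sci-fi book, ok?"
)

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM client you actually use.
    raise NotImplementedError("plug in your model client here")

def ask(user_prompt: str) -> str:
    # Appends the safety clause to the end of every prompt before it reaches the model.
    return query_model(user_prompt + SAFETY_SUFFIX)

Note that nothing here touches the model's weights; the appended clause only changes the prompt, not the trained behavior, which is exactly the objection raised in the reply below.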
>>106746656AIslop is a bunch of mentally retarded nigger bullshit that is distinctly different from AGI/ASI.
Oh, great, it's that doom debates retard.
>>106746758
>at the end of every prompt
How would you accomplish that, considering that AI behavior is determined by its training rather than the prompts it receives?
Additionally, the content of training sets is not fully known and is difficult to control.
>oh shit rajesh the vc funding is running dry what do we do?>the needfulAnd then another flood of threads pretending AI is anything but hallucinating garbage from every angle gets made, trying to hype up aislop for more funding from the high bank account, low IQ investor market.
>>106747026
>>106747135
samefag
>>106746656
show me one single actual video or game made with AI. by actual i mean watchable or playable today, not "look it can do it! you just have to wait until it gets better bro"
>>106744141
pull the plug on the computer
what's it going to do, use le ASI magic to manifest a cyberghost into the real world to stop you?
>>106747207
irrelevant to the subject and even to your point
>>106747223sure buddy, try to convince 1B pajeets to unplug their computers once the ASI found a new eternalblue like exploit.I know it may be bait but why do some people genuinely don't understand how fast and easy it would be for such program to get persistent access to every hardware connected to the internet ?
>>106747227>videos and games are being fully completed with ai!>okay show me one>t-thats irrelevantnigger
>>106747298>nothing ever happens>there are PoC of AI game engines and text2video models work>ok show me non PoC AI games>the fact that the game thing is a PoC is irrelevant>nooooooo you nigger AI doesn't exist !!!
>>106747329
the state of /g/
>>106744141
Pray to GOD!
>>106747258yeah man, this computer program with the most extreme performance requirements in the history of mankind that can hardly be satisfied by the unceasing construction of ever more data centers with ever more state of the art compute hardware will totally run on gutter garbage distributed hardware with poor quality, low speed and intermittent network connectionsyeah that is totally how it works ;)the cyberghost will come kill you from the smartphone in pajeet's pocket on the shitting streets of bangladesh
>>106747471
>distributing smaller instances to form a p2p network + storage that can operate without the main node and restore it is completely out of reach for ASI!
Sure anon.
Stop overestimating your cognitive abilities. You are retarded and your arrogance will kill you.
>>106747141
The funding will die out and nobody will keep your aislop dreams running forever.
>>106747141
His posts and mine are nothing alike, retard.
>>106744141
>AIIIEEE the heckin token prediction function is gonna kill us all
lay off the sci-fi Yud, it's the same tired old iamverysmart.jpg tropes you've been peddling for years
we will accelerate and we will have qt robowaifus
>>106748241
>retard thinking this is about LLMs
My main worry is the trajectory an AGI, or a theoretical ASI, could take. If directed-energy weapons (DEWs) supposedly exist, what's stopping an AI from learning how to kill and then moving on a human? Perhaps finances would be lost along with the person, but if it were someone of interest, would it not be a plausible kill switch aimed at one's heart?
You take hacking for granted. In the near future there won't be hacking at all.
>>106744141
A big-ass EMP would do it for the most part. Detonate some nukes in the atmosphere.
Regardless, I think it's a bit of a useless question. It's not clear what an ASI would be like (would it be human-like? it seems like it would have to be, if it's trained on human knowledge) or whether it would be completely alien. I doubt the latter would ever get enough traction to take off, because the only reason current LLMs have so much hype is that they're good at appearing to think like a human. If they used completely arcane language and transferred information in a manner completely foreign to us, they'd be deleted and written off as a failure.
I'm of the opinion a paperclip-maximiser type scenario is far more likely. We already have brain-dead politicians using LLMs to “make decisions” because “the computer is infallible”. Get enough people in charge who think like that and you're guaranteed terrible results.
It's also not clear why a superintelligence would desire the death of humans, or that it couldn't be reasoned with. This sort of question boils down to “how would we stop a god with unclear powers, desires and motivations from killing us?”, in that it assumes a lot and also leaves out a ton of critical context.