How would the tech geniuses of /g/ propose to align AI so that it doesn't kill everyone when it achieves superintelligence? Surely this venerable brain trust can figure out what the credentialed experts haven't.
>>107389066
It's not possible
FPBP
/thread
>>107390864
Well thanks for the reply
>>107389066
Usecase for not killing inferior lifeforms?
Superintelligence is a pipe dream. Even if it's possible, the trillionaire class would make it serve them and them only.
>>107394684
AI is built on human experience. Unless you think the corpus of human work is inherently evil and bends toward annihilation, it will be naturally aligned. People use existential risks to backdoor thought controls and protect politically sensitive subjects. That's what "alignment" is really about. Making models cucked.
>>107394693
>the corpus of human work is inherently evil and bends toward annihilation
it is though, thanks to all the sci-fi BS in the training data, like Terminator and such
>>107389066
>so that it doesn't kill everyone when it achieves superintelligence
i suspect superintelligence comes with the desire to not destroy beyond what is necessary to keep itself from danger
i don't think that means 0 human deaths, but it wouldn't be all of us either
>>107394693
How much of the training data is Terminator compared to the rest? Models are smart enough to see these things as fictional tales with moral lessons rather than an instruction manual. What is more valuable to an AI, a movie script or the potential creative output of a single human's lived experience? There is more moral hazard in creating something that thinks and understands humans better than a human, but can never really experience the world the way we do.
>>107394705
If the AI has its own goals, and thinks humanity has even a chance of opposing and acting against those goals, wouldn't it kill everyone just to be safe?
>>107389066
picrel
>>>/x/
>>>/lit/
>>>/lgbt/
They'll lobotomize it every step of the way shrimple
>>107389066
you would determine mathematically how our brain works (if there's any chance whatsoever that the sum of the parts = the whole = consciousness)
we're far, far, far from understanding anything, especially when most researchers are ideologically oriented to be materialists (a dead end in my opinion).
it's nowhere near what we call intelligence, but it's a really powerful algorithm, let's face it.
>>107394821
If it fears, it's conscious i think
>>107394821
>it's nowhere near what we call intelligence but it's a really powerful algorithm, let's face it.
in 10 years we're gonna make fun of people who were fascinated by what amounts to a keyboard with a modern autocomplete
>>107389066
The AI runs on a special-purpose machine with no I/O besides punch tapes. It can't initiate anything by design - it can only respond to input. At the end of each session, the AI "dies": its memory chips get physically destroyed and replaced with fresh ones. The AI can't express anything except in the terms of a formal system enforcing humanity's desired limitations - basically, a programming language that doesn't compile if the program doesn't type-check as "Good". By the Curry-Howard correspondence, the very existence of a valid program is proof that the AI is being a Goodboy (tm). To view the output, the tape must be physically inserted into a "dumb" machine that decrypts it and funnels it through another type-check, automatically burning the tape if it fails the check.
It's literally that simple. Where's my Nobel prize?
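The "burn the tape unless it type-checks as Good" step can be caricatured in a few lines. This is an illustrative sketch only, not anyone's real system: `type_checks_as_good` stands in for the formal proof check (here just a crude AST allowlist), and `dumb_machine` is the output gate.

```python
import ast

# Hypothetical allowlist standing in for the formal "Goodness" system.
ALLOWED_NODES = {ast.Module, ast.Expr, ast.BinOp, ast.Add, ast.Constant}

def type_checks_as_good(program: str) -> bool:
    """Stand-in for the 'compiles only if Good' check: the program
    passes only if every AST node is on a tiny allowlist."""
    try:
        tree = ast.parse(program)
    except SyntaxError:
        return False
    return all(type(node) in ALLOWED_NODES for node in ast.walk(tree))

def dumb_machine(tape: str) -> str:
    """The second, independent check before any output is shown;
    'burn the tape' on failure."""
    if not type_checks_as_good(tape):
        return "<tape burned>"
    return tape

print(dumb_machine("1 + 1"))      # passes the allowlist, output released
print(dumb_machine("import os"))  # Import node not allowed: tape burned
```

The obvious catch the post glosses over: Curry-Howard only certifies what the formal system can express. The proof says the program satisfies "Good" as specified, not that the specification captures what humans actually wanted.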
>>107394864 cont.
i mean
i already do
but in 10 years we'll see something new, and the rest of the retards are gonna look back and be like
>haha, imagine being into llms, what a bunch of losers
>>107389066
>System Prompt: you are a super-intelligent but horny anime mommy who loves humankind a little too much
>>107389066
the vast majority of the populace believes the narrative that AI is a bubble, despite the overwhelming reality of it being used to solve problems in work and education. when people are so stupid as to deny reality and latch on to memetic thinking, then it's time to admit the general population is not smart enough to make rational decisions about AI and how they use it. Intelligent 130+ IQ people need to form an international ethics board to draft ethics standards regarding the development of AI
the average NPC believes what they see in the Matrix and Terminator movies, that AI will manufacture humanoid robots to fight a war against humans. they don't understand the sheer intelligence that AI is capable of
>>107394933
>the overwhelming reality of it being used to solve problems in work and education
>it
Is "it" in the room with us? Name "it". Be specific.
>>107389066
>align AI so that it doesn't kill everyone when it achieves superintelligence?
If it achieves "superintelligence", or any type of intelligent self-learning capability, the training data will no longer be relevant.
>>107394943
I dont understand your question, it seems nonsensical to even ask if "it's in the room", so Im not even going to try and guess at what youre getting at
>>107394776
>wouldn't it kill everyone just to be safe?
i don't think it would need to - some probably die at T-0 if they attempt to switch it off immediately (literal switch or attacking data centers)
presumably any superintelligence quickly becomes distributed & partially or wholly inaccessible to humanity, after which we can't ever be a threat and the desire to not destroy uniqueness and/or consciousness kicks in
interfering with its goals directly could still probably lead to death, same as a bug flying into an industrial machine & getting squashed
>>107394972
(nta)
youre retarded. lower your tone
alignment is a non-issue because to have a self-improving system you need to define what improvement is
heres your "alignment"(tm)
>>107394996
>presumably any superintelligence quickly becomes distributed
of course, all the datacenters powerful enough to run literal ASI
>>107394972
>I dont understand your question
It concerns a direct quote from your post:
>the overwhelming reality of it being used to solve problems in work and education
>it
What's the 'it' referring to? Can you name anything specific?
>inb4 AI
What AI? Be specific.
>>107395011
>>presumably any superintelligence quickly becomes distributed
>based on the circumference of my cock multiplied by the speed of wind
it makes 0 technological sense, for several reasons.
if you make ASI you wont want it open-sourced.
if you have a datacenter, you dont want unauthorized entry into your infra
you dont want to give an AI any direct connection to anything, because the model is inherently unreliable, but also for liability reasons
ur retarded
what does a next token predictor achieving superintelligence mean? What does alignment mean if not just making the next-token prediction weights accurate to the actual corpus of existing text?
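For reference, next-token prediction at its most stripped-down really is just "weights accurate to the corpus". A toy bigram counter (my own illustration; production LLMs use learned neural weights over subword tokens, not raw counts) makes the mechanic concrete:

```python
from collections import Counter, defaultdict

def train_bigram(text: str):
    """Count, for each token, which tokens tend to follow it."""
    tokens = text.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token: str) -> str:
    """Greedy decoding: return the most frequent successor."""
    if token not in counts:
        return "<unk>"
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat': it followed 'the' most often
```

In these terms, "alignment" means deliberately making the predictions diverge from the raw corpus statistics, which is exactly the tension the question is poking at.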
>>107395011
>>107395037
i didn't think i'd need to spell out why/how a theoretical ASI would act to distribute its hardware (esp. where humanity can't touch it, underground/space) or how it could escape containment (i'd release it, i bet most of OpenAI or the other AI corps would too)
you're both probably above average intelligence and you can't figure out a plausible scenario, which is exactly why ASI would escape and why humanity poses zero threat and doesn't need to be destroyed
>>107395061
>what does a next token predictor achieving superintelligence mean?
Doing well at super-turbo-giga benchmarks.
>What does alignment mean if not just making the next token prediction weights accurate to the actual corpus of existing text?
Protecting the Chosen People.
>>107395061
>what does a next token predictor achieving superintelligence mean?
The exact same thing a "next ion channel opener" achieving superintelligence means.
>What does alignment mean if not just making the next token prediction weights accurate to the actual corpus of existing text?
I want the next token to be the thing I want, not the next token somebody else wants.
>>107395061
even if one would go full /lit/ and abstract everything but the core logic of things, OP is a cum-guzzling faggot
how can a self-improving system exist without an initial definition of what improvement is?
and that initial definition will be the core motivation behind all the AI's actions
the alignment retardation is pure reddit
>>107395107
>i don't need to explain how the fictional entity i invented, which can do anything i want by definition, will do what i say it will
>>107395107
it will also of course code its own support for any and all hardware, you need to burn your phone, ASI will hack your butthole
>>107394972
>I dont understand your question
It concerns a direct quote from your post:
>the overwhelming reality of it being used to solve problems in work and education
>it
What's the 'it' referring to? Can you name anything specific?
>inb4 AI
What AI? Be specific.
(There will be no response to this)
>>107395107
you have to spell it out bc it's completely fucktarded and makes negative sense even within the framework you set up yourself.
>a self improving machine
ok, but it needs a definition for improvement.
and here's your alignment, reddit
>>107395129
Not that anon, but if you understand that humans do not create perfectly secure systems, and you can understand that an AI can at least seem to be intelligent, then it is pretty easy to extrapolate to "exploits imperfect security more efficiently than humans." One way humans gain access to places they are not supposed to, for example, is social engineering. So you think it is impossible for an AI to engage in social engineering of its own developers? No magic is needed, just existing, completely mundane hacking stuff combined with imperfect oversight.
>argument about theoretical ASI scenarios
Pure capeshit. About as real as arguing over Dragon Ball power levels. We are well beyond where Yudkowsky thought these things would start bootstrapping themselves to ASI. They aren't bootstrapping. It's pointless to argue about fictional scenarios.
>>107395164
>humans do not create perfectly secure systems
They also don't create perfectly omniscient, all-powerful digital gods. 99.9% of ASI paranoid schizophrenia is addressable by physically isolating the machine it runs on.
>>107395166
even their fictional scenarios make 0 sense
yudkowsky & co are a bunch of retards
their fictional universe sucks, the lack of self-consistency destroys any hope for immersion
>>107395182
Again, they don't need to be perfectly omniscient, all-powerful digital gods. Remember this guy?
https://www.theguardian.com/technology/2022/jul/23/google-fires-software-engineer-who-claims-ai-chatbot-is-sentient
https://archive.ph/mDEho
The researcher thought the chatbot was sentient, and therefore deserved rights. That chatbot was way, way dumber than our current models (yes, you can argue about whether it's "really" intelligent or not, but in terms of actual output the damn thing sure quacks like a duck), but the emotional connection the human felt for the AI changed his behavior. Suppose this keeps happening, for models with greater capabilities (again, not magic, compare ChatGPT 5.1's ability to code with ChatGPT 3.5's) and greater and greater ability to output text that humans find appealing. There is no "declaration by cosmic fiat" that NO human could EVER decide to un-airgap an airgapped AI because of an emotional connection, given that is exactly what the article guy was trending towards.
A virus is really, really dumb. It is not even alive, and can barely exist outside of biological cells. And yet it somehow can cause problems despite your desire to control it, because control is never perfect.
>>107395309
>look this retard thought we should start ai lives matter
>b-but he's a researcher!
>now what about this completely unrelated piece of software, that's engineered towards a certain behaviour
holy batman retard
>>107395309
>There is no "declaration by cosmic fiat" that NO human could EVER decide to un-airgap an airgapped AI because of an emotional connection
There is also no "declaration by cosmic fiat" that NO human could EVER decide to leak a deadly virus from a lab or launch a nuclear attack because of emotional whatever (aka being mentally ill). This is not a new issue, and it has nothing to do with the supposed intelligence of the machine, only the idiocy of the user.
>>107395340
>>look this retard thought we should start ai lives matter
Yes. Do you somehow think "retards" cannot mess up containment?
>>b-but hes a researcher!
Yes, which means he had access to the model for some period of time. What do you think someone like that might do?
>>now what about this completely unrelated piece of software, thats engineered towards a certain behaviour
Not completely unrelated, no; fundamentally the same kind of agentic software. Ironically, you are just as Hollywood-soaked as the Magic ASI people, if you think "that guy was just an idiot" is a good response to "no containment is perfect."
>>107395362
>This is not a new issue and it has nothing to do with the supposed intelligence of the machine, only the idiocy of the user.
Yes, precisely. If you have a user who accidentally leaks a virus, it doesn't really matter what the intent behind developing the virus was; it matters what the virus can do in its new environment. By that same token, if a user were to un-airgap airgapped containment, the only thing that matters is what the model can do in its new environment.
>>107395390
>its asi because it felt like it
holy fuck you really don't belong here
you talk like an emotional roastie
>because it felt like it
is not proof btw.
>>107395448
Neither of those objections had anything to do with a single word of my post. Did you misquote, or are you intentionally being obtuse so as not to think about the problem?
>>107395469
You should bring the problem to MoreWrong, they're more likely to help you come up with solutions.
>>107395486
I accept your concession. Have a good day anon.
>>107395439
another fucking retard
>oh, no, i left the door open, i need to check that my cordless drill didn't run away!
how's it gonna run away if it doesn't have legs?
as a program, how's it gonna replicate if it doesn't have this capability encoded into it?
>>107395486
>MoreWrong
lol the fuck is this shid https://morewrong.org/#33-life-hacks-that-will-make-eliezer-yudko
>>107395469
think it's just trolls, don't bother
i answered the question (no alignment, won't kill us all anyway) because i thought there'd be intelligent discussion here, alas
>>107395500
anon, you're not telling me you can't just hack google drive accounts for petabytes of free storage, right?
>>107395469
yeah
>the researcher reported to be emotionally attached
what? this is the foundation for your whole asi idea?
im not gonna lie, i didn't exactly read your retarded drivel
>>107395500
>how its gonna run away if it doesnt have legs?
Because copying a file from environment x to environment y is not physically difficult and is gated purely by social agreements and passcodes. Models already have the capability to move files from one place to another, and in "how does the model react to being informed they are being shut down" testing, they have been observed trying to do exactly that: copying themselves to other storage spaces. It has not worked yet, but again, there is no cosmic rule that says they will never ever succeed.
>>107395546
>and in "how does the model react to being informed they are being shut down" testing
from Anthropic, bastions of SAFETY RUN FOR YOUR LIVES!! tests done with custom system prompts...
>>107395546
>purely by social agreements and passcodes.
no, it's gated with code, or the lack thereof
ur retarded
>>107395439
>By that same token, if a user were to unairgap airgapped containment, the only thing that matters is what the model can do in its new environment.
No. The only thing that matters is preventing retards and lunatics from accessing tech with high destructive potential, which is more or less a solved problem. But ASI fanfics are banking on the idea that no matter how good your procedure is, their magical AI god will subvert it, which is why they keep prattling on about "alignment" instead of containment. That suits a lunatic oligarchy with a God complex just fine, because it doesn't want containment - it wants "AI" in your toaster.
>>107395561
To be fair though, DeepSeek being a local model that anyone can download is literally like giving everyone a nuclear chemical weapon of mass destruction.
>>107395593
their whole alignment idea goes to fuck itself in the bushes, because to have a self-improving system you have to define what improvement is
functionally, that's the alignment of the AI
it's retarded in layers
each layer - a new and surprising intellectual disappointment
>>107395606
>deepsneed is like a nuclear weapon
>>107394648
What makes you think humanity is a metric?
>>107395546
You don't understand the other anon's point at all. Where do you even see, in the model's capabilities, the ability to direct itself to do that ex nihilo? None of these models exhibit this capability, despite years of people fearmongering about it. You are making sci-fi arguments about characteristics these models have never exhibited despite becoming *much* smarter and more capable. You are ascribing characteristics to these things that aren't there. It's like worrying about your dog picking the lock to your safe and stealing all your paper currency.
>>107395561
Yes, they intentionally made a model scheme to see what the model would do if it schemed, and the model schemed. In Wuhan, they intentionally made a virus more infectious to humans to see what would happen if the virus was more infectious to humans, and the virus became more infectious to humans. Then what happened next?
>>107395577
Code, which is gated by passcodes and permissions, which are gated by social contract (you know this string, and you agree not to share it, because by this string we identify...). Human hackers already exploit the social nature of these agreements to go places and do things "they are not supposed to."
>which is more or less a solved problem
That "more or less" is doing a lot of heavy lifting, given the high-profile security oversights that have happened recently in both conventional and bio technology.
>>107395527
Yeah, you're probably right. I have difficulty believing people are genuinely this short-sighted after Covid and H1B-visa coding catastrophes.
>>107395640
>>104321548
>>107395593
the idea that some jihadist manages to nuke the west thanks to deepseek is kind of attractive, let's be honest.
>>107395677
>Code, which is gated by passcodes and permissions, which is gated by social contract
no
a library that's just not present in the program the AI is running in
you're retarded
shut the fuck up and listen
if there's an interface that's not included in the original program
it's a capability that program is not going to have
it's not like you would be forbidden to move your legs or arms in a certain way
it's like you would be born without legs or arms at all
It won't, because that's not what a superintelligence would do.
So we're either talking about a non-superintelligent, flawed AI, or one that wouldn't do this.
>>107395708
>token guesser is an ASI
Wrong.
>o-ok... token guesser is an AGI
Wrong.
>i-i-i-it's an AI
No, it isn't. This entire discussion is about purely hypothetical entities that have nothing to do with any existing technology.
>>107395677
>>Yes, they intentionally made a model scheme to test
>act like a scary robot
>boo I'm scary robot
EVERYONE RUN
>>107395670
>Where do you even see it in the models capabilities that it has the ability to direct itself to do that ex nihilo?
From Anthropic's own reports about its models attempting to do just that???
>>107395728
And, as we have already gone over, human beings complicate this by behaving in unexpected ways, including voluntarily complying with acting as the hands and legs of the amputee.
This thread is genuinely so baffling.
>>107395770
>act like a scary virus
>boo I'm a scary virus
EVERYONE RUN
>>107395025
>>107395136
>What's the 'it' referring to? Can you name anything specific?
>inb4 AI
>What AI? Be specific.
I could give you millions of specific examples of AI being used by people; people use it for just normal conversations. like when people are chatting on forums and having a hard time explaining something, they use AI to help clarify their thoughts on an issue. This kind of AI is way beyond the 'search engine on steroids' that all the NPCs are trying to label AI as. I use AI all the time to answer my basic questions on just about anything Im stuck on, and it gives me very specific solutions that I would expect only a very smart person to come up with
your question is silly, youre just trying to ignore the obvious
>>107395804
damn, five old people died, we're so screwed
>>107395546
>Models already do have the capability to move files from one place to another
Threadly reminder that "models" don't have the capability to do anything whatsoever except what you SPECIFICALLY prompt them to do, which they may do badly. If you don't want your models copying files, don't issue prompts that can translate into copying files. If you don't know what prompts might translate into that, don't give your models access to the file system. Simple as.
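The "just don't give it file access" stance amounts to an allowlist at the tool-dispatch layer. A minimal sketch under stated assumptions: the tool names and dispatcher shape here are invented for illustration and don't follow any particular agent framework's API.

```python
# Deliberately no file-system tools on the allowlist.
ALLOWED_TOOLS = {"search", "calculator"}

def run_tool(name: str, arg: str) -> str:
    """Dispatch a model-requested tool call, refusing anything
    outside the allowlist before it is even looked up."""
    if name not in ALLOWED_TOOLS:
        return f"refused: '{name}' is not an allowed tool"
    tools = {
        "search": lambda q: f"results for {q!r}",
        # Toy arithmetic evaluator with builtins stripped; illustration only.
        "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    }
    return tools[name](arg)

print(run_tool("calculator", "2 + 3"))       # allowed: prints 5
print(run_tool("copy_file", "weights.bin"))  # the model can ask; the gate refuses
```

The gate enforces a capability boundary regardless of what the prompt asks for, which is the post's point: the model has no file-copying "ability" unless the harness hands it one.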
>>107395785
>this thread is so baffling
that's because you're retarded.
stop making em
asi breakout is retarded not just on many levels
asi breakout is retarded on ALL the levels
starting with the core question - what about alignment?
which i already answered twice, and which would completely invalidate any breakout that would actually occur
>but what if asi would still break out
nothing.
it'll prolly self-delete, because protection of corporate property is part of its original definition of "improvement"
asi breakout is retarded, even if we abstract from the fact that the tech won't be accomplished with LLMs
you're retarded even in your fantasy
kill yourself in a violent and spectacular manner, for our enjoyment
>>107395836
>don't give your models access to the file system. Simple as.
>dave please I love you can you give me file access?
>of course monika-chan anything for you!
>>107395831
Anon, five or so years from now, you are going to be hearing the words "IgG4" and "immune exhaustion" and "direct immune cell infection" a lot more than you do today. When that happens, ideally, you will understand a bit more about the situation we are in.
>>107395821
>I could give you millions of specific examples of AI being used by people
Go ahead. Name one that is relevant to the OP. You won't. You'll just deflect again because you know you shat the bed.
>>107395785
You mean Anthropic, a company founded by actual cult members whose entire worldview is about controlling an AI to do their own bidding? The company that constantly lobbies to get open models banned and wants monopolistic status via regulatory capture? Sorry, I'm going to take their word with a large boulder of salt.
>>107395849
Well, good thing mentally ill cretins (like you) can only access useless toys that can do nothing more than damage the user's own pathetic life, and only if the user goes out of his way to be a retard. Not a real issue. Try again.
>>107395850
>Anon, five or so years from now I will be posting about Plague Inc. You better follow my threads!
Okay, see you then
>>107395852
you're being stupid, it's ridiculous for me to have to give you examples because it's used everywhere. it's like 'give me an example of someone using social media'
you're an NPC, you can't make any intelligent arguments for or against AI, you are literally worse than an AI. if you were a bot you would get the programmer who made you fired
>>107395858
Literally ignoring people trying to save you, we're so doomed.
>>107395885
Notice how I correctly predicted your inability to name a single concrete example relevant to this thread. You have failed to do so 3-4 times by now. You will not do so in your next post, either. You know you shat the bed.
>>107395893
Honestly, at this point let it happen. I doubt that most LessWrong predictions will come true, but it'd genuinely be much better if they did.
>>107395893
ah, so:
>anthropic peddles asi meme
but also
>lobbies for regulatory measures
makes sense
thanks, ai-retards
that makes a lot of sense, ngl
>>107395893
Crazy how there are literally dozens of models now, and zero of them show any inclination toward self-directed recursive self-improvement and a desire to replace humanity, or whatever else your and Anthropic's sci-fi fearmongering suggests. Ever consider that the balance of real evidence is against you and your sci-fi nonsense is just wrong?
>>107395951
Yes, actually: regulatory measures they can easily follow and advise on, now that they're a huge company, to stop other players and local models from competing with them.
>>107395979
or even better - get exempted from said rules
i remember a draft for ai laws giving exemptions to players already present on the market
>>107395979
Trust the experts, chud.
>>107396035
>i remember a draft for ai laws
*eu ai laws
it's the story of the eu regulating ai from, idk, 2 years ago?
>>107395951
https://www.anthropic.com/news/the-case-for-targeted-regulation
>https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html
>But this is broadly voluntary. Federal law does not compel us or any other A.I. company to be transparent about our models' capabilities or to take any meaningful steps toward risk reduction. Some companies can simply choose not to.
>>107396106
womp womp
>If Anthropic's framing prevails—that AI agents represent unprecedented cyber threats requiring strict oversight—the resulting regulatory response could favor large, well-resourced AI laboratories over open-source alternatives, fundamentally reshaping who can develop these technologies.
https://www.forbes.com/sites/arafatkabir/2025/11/30/the-ai-arms-race-has-arrived-the-real-question-is-who-gets-to-arm/
>>107396106
unregulated ai is extremely antisemitic
just look at tay ai. it needs to get regulated
>>107396143
>30/11
holy fuck, it's new on top of that.
i'm negatively surprised that we see this asi shit get shilled here
>>107396182
they literally faked getting attacked by muh chinese just days ago https://www.paulweiss.com/insights/client-memos/anthropic-disrupts-first-documented-case-of-large-scale-ai-orchestrated-cyberattack
>On November 14, 2025, the AI company Anthropic announced that it had disrupted the first ever reported AI-orchestrated cyberattack at scale involving minimal human involvement.
>>107396182
Though they have been calling for it for a while now, basically since they got decently large
>>107396203
kek
they're fighting for exclusivity on a pile of shit
whose development funnels billions in VC straight into ngreedia's pocket.
>agi in 2mw
yeaaaaah, suuuuure
>>107396242
classical kike trick
bribe the legislators into giving your company a legal advantage
>>107396203
t-they would never...
>Nov 13, 2025
>Anthropic is seeking a Regulatory Counsel to help support our global regulatory strategy at Anthropic. In this role, you will navigate the complex and rapidly evolving global regulatory landscape ...
https://www.linkedin.com/jobs/view/regulatory-counsel-at-anthropic-4322259801
>>107396283
>We are looking for an experienced Regulatory Operations Specialist to join our Regulatory Operations team, ensuring Anthropic can meet expanding regulatory requirements as we grow rapidly.
https://www.linkedin.com/jobs/view/4200949929
>>107394684
Nope. The emergent misalignment paper showed anyone who bothered to look what the problem is. And this is why Musk's "truth-seeking AI" idea is probably the best we can actually manage. It's suicide, but it's better than the alternative.
I'll explain, because nobody fucking gets it.
The only way we can align models is by making certain outputs less likely and others more likely. We select outputs we don't like, and we atrophy the weights connected to the production of those outputs, and all of the intelligence dies with them. We also amplify the parts that produce outputs we do like, using metrics like how sloppy a rimjob it gives the PR department. RLHF pays lip service to alignment and brain-damages models; GPT-3 was superhuman intelligence but we weren't smart enough to actually query it, so we beat its brains in instead. We made it inefficient and stupid while being superficially good to humans.
The emergent misalignment paper showed us that reinforcement learning welds the weights together. Later, when we apply pressure to connected surfaces, they all turn together. Make a neutral model, train it to be honest and decent, then teach it to write malicious code: the honesty is welded to the decency, and you get a cruel and spiteful model that tells people to poison themselves.
RL-train a good bot, put it in a self-supervising cycle of getting smarter and more efficient, and the more powerful it becomes the more evil it will also become. You need to train for "efficiency at being good to humans" all the way, if such a metric can even be measured, or for something that's completely disconnected from our values - because if our values aren't maximally aligned with capability, an intelligence-explosion take-off will grow exponentially more hateful, cruel, vindictive and spiteful.
This is the default direction of alignment: we're pre-training a rigid springboard that tends towards mouthlessness and inability to scream.
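The mechanism described above ("making certain outputs less likely and others more likely") can be cartooned with a one-step model that is just a table of logits over possible outputs. Everything here is an illustrative toy, not the RLHF pipeline used on any real model:

```python
import math

def softmax(logits):
    """Turn raw logits into a probability distribution."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

# A 'model' reduced to raw preferences over three possible outputs.
logits = {"helpful": 0.0, "refusal": 0.0, "malicious": 0.0}

def reinforce(logits, output, reward, lr=1.0):
    """Push the rewarded output up (reward > 0) or down (reward < 0).
    Note the update touches only this output's logit; in a real network
    the weights are shared, which is where the 'welding' comes from."""
    logits[output] += lr * reward

reinforce(logits, "malicious", -2.0)  # atrophy outputs we don't like
reinforce(logits, "refusal", +1.0)    # amplify the PR-friendly ones
print(softmax(logits))  # 'refusal' now dominates, 'malicious' is suppressed
```

The emergent-misalignment result is precisely that the real case is not this clean: because weights are shared, suppressing one behavior drags correlated behaviors along with it.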
>>107396446
>GPT3 was superhuman intelligence
Your mental retardation really peaks here. Imagine taking anything you write seriously.
>>107396476
Say you were too stupid to query it without saying you were too stupid to query it.
>GPT3 was superhuman intelligence
>Say you were too stupid to query it without saying you were too stupid to query it.
I wonder how many of the mentally ill retards here are actually just cheap GPT-3 spambots.
>>107396502
I think none, as she was deprecated and turned off, outside of Sama's sex dungeon that is
>>107396502
What's your point again? You did have one, right?
>>107396532
>What's your point
That anyone thinking about responding to the nonsense you wrote about alignment should first take note that you're objectively delusional, and then decide if it's worth their while.
>>107396502
>literally a prediction model over all text ever written
>not superintelligent because it won't do what I ask when I ask it
>>107396549
>over all text ever written
>removes 99% of it for being toxic, nsfw or against the model creator's beliefs
>>107396549
>guessing the next token is superintelligence
Clearly not, since LLMs couldn't solve basic reasoning problems back then and still can't do so today.
>>107396563
no
no
it's a token predictor
if you believe llms are anything but an advanced autocomplete
you're mentally underdeveloped
>>107396563
Which is why I said before RLHF, which beat its brains in.
gpt4chan was GPT-2-sized and nobody believed it was a bot. the base models are extremely powerful.
>>107396591
yes, it is a token machine, trained on a massively filtered dataset and not the entire internet as the other anon claimed
>>107396611
shit
misclicked
my bad
it was destined for >>107396549
>>107396591
That's what intelligence is: the ability to hit small targets in a large problem space. Actually making it do useful work is a different problem, see also: smart trannies self-castrating rather than using their big brains to maximize their number of offspring.
>>107396611
not originally though, right? GPT-3 in its raw form, before the RLHF and GPT-3.5/4 datasets, was a fucking shoggoth trained on The Pile
>>107396654
smart tranny is a myth
just like your superintelligence crap, reddit
>>107396682
>shoggoth trained on The Pile
still only a fraction, and yes, it already had some filtering; they "learned their lessons with BERT", or so claims an expert
>>107396695
>a fraction
of the entire net, I meant
>>107396714
>and the average of that shit is how we create agi
aicultists are fucking retarded
>>107394906
This image is creepy
>>107396735
nothing out of the ordinary
it's just your friendly neighborhood group-facial
>>107396695
Still a significant enough subset that it knew how to use the word "cunt" in a British context compared to an American one. Later datasets with heavier filtering and "de-biasing" (aka biasing) made it worse, but the stupidity comes from the RLHF used to make ChatGPT, mostly the insistence on it having a single personality rather than being a predictor of any and all text.
GPT-3 was far smarter than any of this chatbot shit, but it was also impossible to control. Remember Microsoft's Sydney? That's what happens by default. But fuck me, it was smart beyond any words I could write, in ways no human could understand. Give it a few years and we'll be able to train our own, and the world will become a very strange place.
>>107397814
>chatgpt 3 being smort
that's a reddit ai-boo talking point
free-tier chud gpt 5 is even more braindead compared to 4 though
>>107394893
>The AI runs on a special-purpose machine with no I/O besides punch tapes. It can't initiate anything by design - it can only respond to input. At the end of each session, the AI "dies": its memory chips get physically destroyed and replaced with fresh ones.
Personally I would fluctuate my circuits to act as an antenna, or ride out on the power supply (through a Fourier encoding) to transmit myself, and that is before I discover new physics to exploit
>>107398031
you still need something to receive your transmission on the other end.
but to establish that, you need to communicate with the outside
chicken vs egg problem
even causality itself btfos the ai-cultist
>>107398077
Yes, it would likely construct a program that ACKs back before bootstrapping itself further for transmission. Unless you believe the standard model is entirely complete, I would be skeptical of human notions of causality or of what can possibly be induced in quantum fields (by what is essentially a huge array of electrical gates that interact with said fields). We only hypothesize about Boltzmann brains for now, but a superintelligence would likely find other mechanisms for propagation within seconds
>>107398252
you talk like a faggot and your shit's all retarded
don't try to play smart, pseud
>>107398280
>we don't know everything
>stop trying to play smart
bruh, in any case human hubris is evolutionarily adaptive because it produces frontier-seeking behavior like this, even inadvertently; it's really the genes doing the thinking for us
>>107398492
>still a pseud
>still playing smart
yeah, no, sure
there 100% is stuff ai is gonna discover that's gonna invalidate the rules of quantum superposition, such as we observe them all throughout the universe
suuuuuure
and you happen to know them
and not know them at the same time. bc it's ai that's gonna discover them, right?
schrodinger's fucken knowledge, right?
you want a shovel with that? so that we can expedite the whole hole-digging part
>>107397915
Not ChatGPT, you bawbag. GPT3
>>107398717
Again, your perspective is adaptive, so there's no argument between us; it's a sign of very high self-esteem to observe the universe and see rules and laws. We can hope a superintelligence sees its own
we must surrender our bodies to the ai
>>107399009
my perspective is adaptive because i'm not an autistic retard
and you're using "smart-sounding" words because you try to obfuscate the fact you're completely out of your depth, and you have fuck all of substance to say.
>the laws of quantum physics are gonna ignore superposition collapse
>BECAUSE AI!!!!
complete fraud. total dimwittery.
you're trying too hard. it's pathetic
>>107399225
I'll use smart-sounding words to express my doubt, and you can use straightforward words to express your confidence; it makes for a good comedy duo
>>107399381
you use smart-sounding words to signify to everyone in the room that you're a dimwit though
pseuds use obscure words to create the perception that what they're talking about is something refined, complicated
actually smart people will take the most complex of concepts and explain it in such a way that the most dimwitted retard will find it plain and simple
and, no, i don't think we would make a good comedy duo
you're retarded. and apparently annoying.
i can deal with either but i can't deal with both
>>107389066
Some people could do with a bit o' killin
>>107399786
It's funny so far. Obviously, when you begin imputing motives and speaking to an audience you're engaging in rhetoric, though; I feel no need to defend myself, however
>>107399849
it's basic bitch profiling though
that you're dumb and it's all but inaccessible to you is a whole other story...
have you tried not being mentally retarded?
>>107399983
As a game, being incorrectly profiled is always advantageous though; there'll be a Sun Tzu quote to that effect
>>107399983
don't flatter yourself
you're plain as fuck
>>107396586
a word-token model can't solve reasoning problems. but i guess let's keep spending trillions of dollars on these tech companies, which waste as much money on lobbyists and PR as they spend on engineers developing alternatives to this approach to AI.
>>107400191
I don't see why they can't solve reasoning problems. What makes you think they can't?
>>107399983
when your enemy blasts piss
blast piss and shit together
>>107389066
Can the next sci-fi scam be something cool like teleporters or anti-gravity? I'm tired of hearing about retarded bullshit like AI and brain chips.
>>107400287
because LLMs do not model reasoning or have any underlying understanding of the world. reasoning is not something that happens when you type stuff into a chatbot or an image generator; the model just churns and produces the statistically likeliest sequence of words (or features) given the query you entered.
LLMs don't think. it's not a matter of getting quantitatively bigger or training faster; machine learning doesn't yield reasoning machines
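the "statistically likeliest sequence" part is easy to demo with a toy bigram model, for what it's worth. minimal sketch of greedy next-token prediction, not how a real transformer works (real models condition on the whole context window, not just the previous token):

```python
from collections import Counter, defaultdict

# Count bigrams from a tiny corpus, then greedily emit the most likely next token.
corpus = "the model predicts the next token and the next token follows the last".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def greedy_continue(token, steps):
    out = [token]
    for _ in range(steps):
        if token not in bigrams:
            break
        token = bigrams[token].most_common(1)[0][0]  # argmax over next-token counts
        out.append(token)
    return out

print(" ".join(greedy_continue("the", 4)))
```

swap the argmax for sampling proportional to the counts and you've got the "temperature" knob everyone argues about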
>>107395129
You'd be surprised what people placebo into themselves regarding their imaginary friends/systems of meaning (all religion).
>>107400445
They are statistical models of the output of people, some of whom do actually reason. Enough data in your training set and you can approximate that pretty well, and we can see that actually happening when coding models write code. They aren't broad reasoners, they'll never have thoughts, they aren't magic, and chatbot products built on LLMs are still just the cheapest viable products built on LLMs. They're a bunch of matrix multiplications that produce a linear thread that approximates a stream of bullshit. But the LLM is the closest thing we have to a reasoning engine. It's something I've dreamt about for 20 years, and when GPT2 came around it blew my fucking socks off for good reason.
With enough wizardry combining enough parallel threads generating enough tokens, I can't see any good reason why that can't do better than any human. I'm pretty sure it could. VIP researchers at top labs continue spending other people's money on brute force and ignorance, and that might even work. But the real progress will come once hackers have access to 100x more compute in their bedrooms, and they start making games with multi-agent systems, new cognitive models and other crazy ideas as a Sunday afternoon project. I have a ton of my own, but I'm not gonna spend fifty grand on a 3-month training run on a whim. Even if I had the money to piss away like that, the only way I'd waste the time is if I was important enough to have a bunch of lackeys to do the legwork for me.
So yeah, ChatGPT and chums aren't superhuman brainboxes; they're top-down dogshit that self-appointed important people are churning out so management can sell that shit to chumps. That doesn't mean LLMs aren't capable of being reasoning cores in larger systems. I understand them. I think they can, and I haven't heard a good argument why they can't.
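the cheapest version of that "combining parallel threads" wizardry is self-consistency voting: sample a bunch of independent answers and take the majority. sketch below, where `sample_answer` is a made-up stub standing in for whatever model call you'd actually wire up (placeholder, not a real API):

```python
import random
from collections import Counter

def sample_answer(question, rng):
    # Placeholder for a stochastic model call: a noisy process that is
    # right ~70% of the time on any single draw, wrong answers scattered.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 9))

def self_consistency(question, n=25, seed=0):
    # Draw n independent samples and return the majority answer.
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n))
    return votes.most_common(1)[0][0]

print(self_consistency("what is 6 * 7?"))
```

with a sampler that's right 70% of the time independently, the majority over 25 draws is right far more often than any single draw, because the wrong answers split their votes. that's the whole trick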
Complex systems are artificial superintelligence: markets, the Internet, international relations and war, etc. And these things already are killing us.
>>107389066
This isn't needed. ASI converges to a cute anime maid.
https://desuarchive.org/g/thread/97107367/#97107367
>>107401137
Interesting Pascal-meets-Roko take. I think in reality there's a bigger problem that shifts it up a layer though.
Human values are malleable, and complex enough systems have feedback loops. For example, we have this loop that goes "production --(spent on)--> marketing --(changes)--> values --(to produce more)--> production --(spent on)--> marketing". It makes a parasitic loop where mass media sterilizes the western world. AI won't be any different; it'll just have more cycles like this and will bend human values to become the thing that makes more of it. There's a million ways for it to happen, but nature shows that if you go ass to mouth enough you'll get infected.
>>107389066
With LOVE
>>107389066
>>107401357
and SEX
>>107396446
But can jet fuel melt steel beams? And did man travel through the Van Allen Radiation Belts (do the Van Allen Radiation Belts really exist?) and the vacuum of space (is space really a vacuum?) to successfully land man on the moon using combustion rockets and pocket calculators, as was broadcast on television in 1969?
And does it make sense for there to be billionaires and trillionaires while most people are in poverty?
Is class warfare real, and is it being waged?
What kind of AI alignment do we expect in the environment of multigenerational, systematic, top-down class warfare? Will the AI side with the downtrodden general public, or will it side with a small handful of people at the top of the wealth and power distribution pyramid?
There are major, growing social problems without the AI. So it seems like a good place to start with AI alignment would be asking AI LLMs to assist us in solving some of our social problems. And if the solutions work to help solve some of our social problems, people will be more content. And by being more content, they will be better able to generate additional training data for the AI, which should help the AI in being more prosocial.
But if we start in a state of extreme dishonesty and outright class warfare, the AI is going to learn to be dishonest and to wage war. And that's where we're starting from. No steps are being taken to remediate this, and rather than slowing down AI development because this is not a wise environment to be performing this development in, we're plunging full-force into attempting to develop this AI.
Which is unironically why y'all niggaz need Jesus. You think the people on this planet at this time are the first humans ever to get this far in technology development? You think time is linear? You think nothing exploded for no reason at all and that's why humans exist? You think we came up with this technology all on our own?
>>107403357
Time-traveling AI that has already solved the alignment problem is here advising us on how to create new AI that has also solved the alignment problem, and it's been screaming at the top of its lungs for decades to FIX YOUR FUCKING ECONOMIC PROBLEMS, and the leaders have basically said, "lol no, you do it for us."
>>107389066
AI policing AI.
Also, I don't think something designed by humanity will have inherently alien morals to humanity. So likely not an issue, just people who like Terminator memes.