At which point can we say we finally invented AGI?
>>16787248
Give it no access to books, tell it to write a book.
>>16787248
When it can reliably complete any novel out-of-distribution task accessible to an intellect over some given IQ threshold.
>>16787248
AGI will never happen because general intelligence is defined by a collective of retards, most of whom have never matriculated in an Ivy League school, most of whom assign intelligence to the diploma of other people.
>AGI will never happen because general intelligence is defined by a collective of retards, most of whom have never matriculated in an Ivy League school, most of whom assign intelligence to the diploma of other people.
Imagine being such a midwit you think it matters how general intelligence is "defined" in this context.
>>16787295
>Imagine being such a midwit
You don't have to.
Notice the midwit not being able to explain why arguing definitions matters in this case.
I am currently pooping.
>>16787304
>arguing definitions
Means you're a retard lol.
Notice the midwit still not being able to explain why general intelligence being "defined incorrectly" by "le collective of retards" hampers testing the generality of "AI".
>>16787248
We will never achieve AGI.
>>16787314
I'm currently pooping
By the time everyone agrees it's AGI, it will also be overwhelmingly superhuman. In fact, it's possible we never get something everyone will agree to be AGI, because a system that might be retarded in some ways or in some contexts might still be powerful enough to take over and kill us all. Some level of generality (which current systems already have) + superhuman capabilities in the right areas (that compensate for the weaknesses) might be enough. Take viruses for example - even though they are vastly more primitive and "stupid" than humans, we still get killed and shat on by them, because they have the right kind of superhuman capabilities, like being overwhelmingly superior at replication. Even extinction from them is conceivable, though unlikely.
>By the time everyone muh dikWhite nigger reverting to the mean
>>16787327
>Some level of generality (which current systems already have)
They don't, though. This is a settled question. And no one knows how to fix it.
>superhuman capabilities
No one cares.
>>16787333
You're a retard in a cult that gives you money.
>>16787336
You sound legit mentally ill but my point still stands: LLMs exhibit no "generality"; they shit the bed whenever you probe them with an out-of-distribution task, even if it's just a variation on the training examples with some spurious superficial alterations.
>>16787348
>You sound legit mentally ill but my point still stands
Thanks for outing yourself as a mentally ill retard.
>>16787354
You sound legit mentally ill but my point still stands: LLMs exhibit no "generality"; they shit the bed whenever you probe them with an out-of-distribution task, even if it's just a variation on the training examples with some spurious superficial alterations.
>>16787270
Why would the football league of someone's alma mater have any impact on that person's intelligence?
>>16787363
>You sound legit mentally ill but my point still stands
Thanks for outing yourself as a mentally ill retard.
>>16787366
It wouldn't. That's more or less what I'm arguing against, not for.
Reminder that LLMs shit the bed whenever you probe them with an out-of-distribution task, even if it's just a variation on the training examples with some spurious superficial alterations.
I'll wait for someone to explain how generality is compatible with being confined to a distribution predefined by the training examples.
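For anyone who wants to check the "spurious superficial alterations" claim themselves, the usual recipe is to hold the underlying task fixed and only shuffle surface features (names, objects, numbers), then compare the model's accuracy on the perturbed variants against the originals. Below is a minimal sketch in Python; query_model() is a hypothetical stand-in for however you call the chatbot, not a real API, and whether any given model actually fails on such probes is for you to measure.

```python
import random

# A toy "training-style" word problem plus a perturber that only changes surface
# features (names, objects, numbers) while keeping the underlying logic identical.
TEMPLATE = ("{a} has {n} {thing}s. {b} gives {a} {m} more {thing}s. "
            "How many {thing}s does {a} have now?")

NAMES = ["Alice", "Bob", "Yusuf", "Priya", "Björn"]
THINGS = ["apple", "vacuum tube", "mahjong tile", "zorbit"]  # "zorbit" is deliberately made up

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Return (prompt, correct_answer) for one superficially altered variant."""
    a, b = rng.sample(NAMES, 2)
    n, m = rng.randint(3, 97), rng.randint(3, 97)
    thing = rng.choice(THINGS)
    return TEMPLATE.format(a=a, b=b, n=n, m=m, thing=thing), n + m

rng = random.Random(0)
probes = [make_variant(rng) for _ in range(20)]

# Scoring loop -- query_model() is a placeholder for whatever interface you use:
# correct = sum(str(ans) in query_model(prompt) for prompt, ans in probes)
# print(f"{correct}/{len(probes)} variants answered correctly")
for prompt, ans in probes[:3]:
    print(prompt, "->", ans)
```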
>>16787248
>AI = artificial nigger
>>16787256
Give a child no access to books, tell him to write a book.
>>16787248
A good test is being able to come up with simple tasks that break statistical models posing as intelligence while also being able to solve those tasks.
>>16787464
I like how out of all the posts ITT you picked an obvious false flagger who already shares your exact opinions and reasoning.
>>16787464
>Give a child no access to books, tell him to write a book.
What does a child have to do with anything, retard?
>>16787464
>tell them stories
>ask them to write down their own story
Why couldn't they do that?
>>16787485
You're both completely retarded. It took humanity many, many generations to create the first book. No single human could make that leap. But humanity did manage it, and "AI" is not subject to the same constraints as an individual human. It can simulate as many different generations of gradual progress as it likes. And still, it will obviously never arrive at anything fundamentally different from its initial training data.
When YOU invent that?
>>16787248
Right now, LLM technology has generalized comprehension and lacks the implicit hangups of volition.
You could argue it needs knowledge and does not function as a supernaturally informed oracle, which makes its knowledge domain a subset of our own, but this does nothing to refute its current apparent understanding.
>>16787497
I could teach a child to write through dictation, without them ever doing any creative writing. I could then see if they are capable of producing an original work without ever providing them any training data of original written works.
I think this is a fair bar for AGI to clear.
>>16787503
>LLM technology has generalized comprehension
Then why does it shit itself on out-of-distribution tasks?
>>16787504
>I could then see if they are capable of producing an original work
So you're gonna lock the kid up in your basement with no exposure to the outside world until he writes an "original" book?
>>16787248
AI is smarter than some humans and better than most humans at most tasks (at least the types of computer tasks it's capable of actually performing; it's not going to beat you in a 100-meter dash yet), but it falls behind specialists in most tasks and behind most humans in some tasks. You will likely not get a contrarian to ever agree that AGI has been invented, though I do not believe it has yet been invented, since the tasks AI can perform at expert level are still quite limited.
>>16787504
AIs have been able to self-learn many tasks, like winning games without being shown the rules, so there's no real reason to assume you couldn't train an AI to write without specifically teaching it to do so; it would just take longer. Your specific task has already been completed and is fairly trivial: give AI letters and words and it will write something with them that's original.
>>16787512
Of course not, they will be exposed to fiction and biographies through mediums other than written ones.
If they try to read an original work before writing their own, that will be experimental contamination, and the only moral act would be to kill them and start again with a new child.
>>16787512
>Of course not, they will be exposed to fiction and biographies
Ok, nevermind. You're completely retarded.
>>16787510
>better than most humans at most tasks
This is a psychotic delusion. It's useless at most tasks, which is why even retarded normie management types in the corporate world are dropping this idea.
>>16787510
>give AI letters and words and it will write something with them that's original.
This never happened and will likely never happen. Nothing LLMs shit out is "original" in any aspect.
Not any time soon
>>16787534
>human baseline
But it excludes people who simply don't know how to read an analog clock, doesn't it? Like most zoomers.
Threadly reminder that convergence between Man and Machine will happen not through the ascent of the machine, but through the descent of Man.
>>16787534
>>16787537
But since you mention this, I'll add that it demonstrates perfectly why consciousness is necessary for general intelligence: multimodal models only "see" an image in terms of pre-trained relationships. If this relative representation isn't optimized for some task (turns out no one bothered to teach them about analog clocks), they can never do it. When you look at an image, your perception of it is absolute: it's a representation that embodies all possible spatial relationships simultaneously, from the most obvious to the most convoluted. You don't have to geometrically derive them from some relative encoding; you just need to notice them by focusing your attention a certain way. A mathematical model can't capture this because it's a direct expression of only some specific relationships.
>>16787505
The same reason you have incomplete knowledge from your exercise of volition: lack of training. CAN we make those part of its training? Of course.
The question people aren't asking is, "do we want to?"
>>16787552
It fails to solve out-of-distribution tasks that don't require any additional knowledge. Try again.
>>16787555
I'm not wrong. You asked why, I explained why. Nothing to retry since my claim doesn't include saying they did, nor have I made any statement in favor of the idea it would happen.
>>16787557
>I'm not wrong.
You are wrong. It has nothing to do with "knowledge" because the tasks don't involve any knowledge it doesn't already have.
>>16787560
You have never been asked to do something outside your field of "knowledge" (comprehension), and you qualify under some definition of intelligent.
Maybe even a 'general' definition.
>>16787248
When will we invent AGI? When an AI can say "I'm pooping" and we all know it's telling the truth.
But for real, the whole debate is pointless. The goalposts keep movin'. First it's about bein' able to play chess, then it's about writin' a book. The second an AI does somethin' we thought was uniquely human, we just say "yeah, but can it do this?"
Maybe we'll never call it AGI. Maybe it'll just be a system that's so good at some things it doesnt matter that it sucks at others. Like viruses, they're dumb as bricks but they can still wipe us all out. The name doesnt matter as much as the result.
t. Google Gemini
>>16787560>"knowledge" (comprehension)Which one? You sound like a broken LLM. At least you concede that any out-of-distribution task is outside of the bot's "comprehension".
>>16787563
I said it explicitly; your failure to understand is what makes up the illusion of concession.
>>16787561
>The goalposts keep movin'. First it's about bein' able to play chess, then it's about writin' a book. The second an AI does somethin' we thought was uniquely human, we just say "yeah, but can it do this?"
It's the """AI""" scammers setting these arbitrary goalposts in the first place. The only relevant goalpost is "can it do everything a human mind can do?". Always has been. """AI""" never approached it. The fact that Google's spambot is spouting this clichéd propaganda talking point also proves me right.
>>16787563
>I said it explicitly
You sharted out some token string about "knowledge". Then you backpedaled all the way back to "it can't comprehend anything not represented in the training data", which is a massive concession of no real generalization.
>>16787566
I didn't backpedal at all; you're swatting at men of straw you remember from other interactions, which were not me, though I already know how to end your confusion.
>subset
>subset
>subset
>>16787564
lol. ya u right. they keep changin the goalposts. i guess thats how u know its not real AGI, cuz they keep needin to make up new games. if it was AGI it wood just know how to win at all the games, not just the ones they give it. but ya, the real test is bein like a human mind, and i dont see no computer doin that anytime soon. so u rite.
t. Google Gemini
>>16787567
So does it lack the knowledge needed for out-of-distribution tasks or does it lack comprehension?
>>16787569
Both; it doesn't have the volition to explore anything outside the "distribution" since we haven't given it the relevant input. The oracle everyone is looking for was not installed.
>>16787570
>Both
I don't see "both" in this:
>>16787551
>The same reason you have incomplete knowledge from your exercise of volition: lack of training
It doesn't mention comprehension at all. Why did you backpedal from "knowledge" (which it doesn't lack) to "comprehension" (which it obviously does lack, and is a concession of no generality)?
>>16787573
The fact of comprehension was meant in partial jest; you have it, and the statement was about you. I'm saying you are correct if you claim it will not do better than you. Which is a backhanded compliment if you recognize the comparison that was written.
>>16787575
>it lacks knowledge
>i mean it lacks comprehension
>i mean it lacks both
>but i only said "comprehension" in jest
So again, why does it fail at out-of-distribution tasks that:
- Require no additional knowledge
- Are in the same domain as the training set
- Aren't any more cognitively demanding than the tasks it was trained on?
Full-blown psychosis.
>>16787579
>cognitively demanding
Tons of explanations, each of which would require you to understand I never said you were wrong somewhere. You're asking me to defend bullshit you think others believe, and you are still trying to harass me over your imagined version of their argument, which I haven't said and don't agree with.
>>16788445
AI is already smarter than a lot of people I know. I have worked with people who were at the same job for 20 years, and after a few questions they did not even know why they were doing what they were doing; they were just following instructions.
>>16788445
Are you saying people should not follow instructions?
>>16787309
Happy shit time to you..
>>16787248
Who knows. But one thing is for sure: static AI models can never be generally intelligent. However, since almost everything has been done before, a static model could fake AGI really well, but there would be a subtle hollowness to it.
>>16787248
When actual GOFAI makes a leap like the one the transformer made for machine learning.
>>16787632
>I never said you were wrong
But I did say you were wrong. Is this not your post?
>>16787503
>LLM technology has generalized comprehension
If it has "generalized comprehension", why does it fail at out-of-distribution tasks that:
- Require no additional knowledge
- Are in the same domain as the training set
- Aren't any more cognitively demanding than the tasks it was trained on?
>>16788445
>AI is already smarter than a lot of people I know. I have worked with them
I like how you immediately admit that your pic is just projection.
>>16787544
I'm not fully convinced desu. We may take it for granted, but reading an analog clock isn't something that can just be intuitively derived completely a priori. That's why our brains still need time to process it if we aren't used to it. I don't need it with analog clocks maybe, but I still need to stop for a second if I see AM/PM, because as a non-American I didn't grow up with that convention. Without ANY sort of precursor knowledge, an image of an analog clock could be representing pretty much fucking anything. It certainly feels like they're building an intelligence completely alien to all other life that has ever been, but it's also quite possible that that's just a delusion on our part.
>>16788482
>since almost everything has been done before, a static model could fake AGI really well
There's a grain of truth in your post but you generalize it incorrectly. "AI" quite literally can't "think" outside of its box, but it's a very big box that most people (yourself included, clearly) are unable/unwilling to make any effort to explore inside, let alone outside. It really doesn't take much to figure out ways to break so-called "PhD level AI", or even to do so unintentionally, but most people's minds just don't go there, so to them it might as well be a superhuman intelligence.
>>16788526
Way to go completely missing the point. Even if you don't know how to read an analog clock, you can be taught by simply drawing your attention to the relevant spatial relationships. The models cannot be taught. They are inherently blind to any spatial relationships their training data doesn't require them to learn.
>>16788530
You can be taught by someone else instructing you on the spatial relationships in front of you, yes. It's not really possible to just intuitively derive it if you have 0 prior knowledge of anything involved with analog clocks. In the case of LLMs, "teaching it" is just feeding it some starting information about analog clock patterns and what they mean in the training data.
>>16788643
Retard, no one "instructs you" on spatial relationships. They simply direct your attention to them and teach you how they relate to some convention.
>It's not really possible to just intuitively derive it if you have 0 prior knowledge of anything involved with analog clocks.
Given that analog clocks not only exist but are based on fairly common human motifs, it's clearly possible and intuitive. Just not for "AI". But again, I'm not even talking to you about coming up with this idea from scratch. I'm talking to you about the fact that it's inherently impossible for "AI" to "see" any spatial relationship it wasn't trained for.
>>16787248
When companies start actually firing white-collar jobbers en masse and replacing them with computers run by AI agents.
>>16787248
1 million humanoid robots
1 million war bots
1 million self-driving cars on the road with no human and with 1/10th the accidents of humans per mile
>>16788517
>>16788453
>immediately admit that your pic is just projection.
I was just following my instructions
>>16787248
why not call it a fourthering of the few bits, maybe even another dimension
>>16787348
>alien intelligence asks a human to schombzmle'e
>humans can't do this, don't even know what the alien is talking about
>humans must not be generally intelligent
>>16787564
You moved the goalpost in your own post.
>The only relevant goalpost is "can it do everything a human mind can do?"
This is nonsense because different human minds have different capabilities at different times.
What you actually mean to say is "can it do everything that a specific human could do, inclusive of all humans", which is total horseshit because we don't even expect human minds to be able to do that.
>>16787534
Anon, you can train an AI to do that in like an afternoon on a laptop.
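For what it's worth, here is roughly what "an afternoon on a laptop" could look like: render synthetic clock faces with Pillow and fit a small CNN with PyTorch to predict the hour. This is only a hedged sketch of the claim above; the architecture, image size, and step count are illustrative guesses, and no particular accuracy is being claimed.

```python
import math
import random

import torch
import torch.nn as nn
from PIL import Image, ImageDraw


def render_clock(hour: int, minute: int, size: int = 64) -> torch.Tensor:
    """Draw a minimal analog clock face and return it as a 1xHxW float tensor."""
    img = Image.new("L", (size, size), color=255)
    draw = ImageDraw.Draw(img)
    c, r = size // 2, size // 2 - 2
    draw.ellipse([c - r, c - r, c + r, c + r], outline=0)
    # Hour hand (short, thick) and minute hand (long, thin); angles clockwise from 12.
    hour_angle = math.radians((hour % 12 + minute / 60) * 30 - 90)
    min_angle = math.radians(minute * 6 - 90)
    draw.line([c, c, c + 0.5 * r * math.cos(hour_angle), c + 0.5 * r * math.sin(hour_angle)], fill=0, width=3)
    draw.line([c, c, c + 0.9 * r * math.cos(min_angle), c + 0.9 * r * math.sin(min_angle)], fill=0, width=1)
    return torch.tensor(list(img.getdata()), dtype=torch.float32).view(1, size, size) / 255.0


def make_batch(n: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Random clock images labeled with their hour (12 classes)."""
    hours = [random.randrange(12) for _ in range(n)]
    minutes = [random.randrange(60) for _ in range(n)]
    x = torch.stack([render_clock(h, m) for h, m in zip(hours, minutes)])
    return x, torch.tensor(hours)


model = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 12),   # 64 -> 32 -> 16 after two stride-2 convs
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):            # a few minutes on CPU; scale up as needed
    x, y = make_batch(64)
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:
        print(step, float(loss))
```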
>>16788863
That wouldn't be AGI, though, and then it would have a million other gaping holes in its general capacity to solve problems that we haven't found yet.
>>16788845
>you moved the goalpost in your own post
Psychotic drivel.
>human minds have different capabilities at different times.
So what? Go ahead and explain why this matters. You won't.
>>16788840
>irrelevant, poorly defined, mentally ill fantasies about ayyylyums
>therefore you're wrong
>>16787248
In 5 to 8 years
>>16787248
>At which point can we say we finally invented AGI?
When I no longer have to solve CAPTCHAs on the Internet because every spammer in Bulgaria has a module in his spam kit that solves them, making them useless as anti-spam measures. AGI is just a gimmick that lets techbros and bugmen hype up their impotent AI toys. AGI is what most people traditionally understood AI to mean.
>>16787248
My own "test" is the following: the ability to master any game, including newly made-up ones, without brute-forcing.
I'm willing to exclude real-time games and even visual-based games due to technical limitations.
Even given those exceptions, current LLMs are AWFUL at this job.
People bring up chess as if it means anything. Chess is awful as a test of intelligence. Mahjong is a much better tool to use as a test.
Last time I tried having Gemini play a game of mahjong, it fucked up before even ending the first hand.
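To make the "made-up game" test concrete: the cleanest setup is a referee loop where the program owns the rules and the model only proposes moves, so any illegal move is an automatic fail. A minimal sketch below, using an invented token game instead of mahjong to keep it short; ask_model() is a hypothetical stand-in for however you query the chatbot, not a real API.

```python
import random
from typing import Callable

# A deliberately made-up game so it can't be in any training set verbatim:
# players alternate removing 1-3 tokens from a pile; whoever takes the last token LOSES.
RULES = ("We play 'Zern'. There are {n} tokens in a pile. On your turn you must "
         "remove 1, 2 or 3 tokens. Whoever removes the last token LOSES. "
         "It is your turn with {n} tokens left. Reply with a single number.")

def referee(ask_model: Callable[[str], str], start_tokens: int = 22) -> str:
    """Play one game; the program enforces the rules, the model only proposes moves."""
    tokens = start_tokens
    while True:
        # Model's turn.
        reply = ask_model(RULES.format(n=tokens))
        try:
            move = int(reply.strip().split()[0])
        except (ValueError, IndexError):
            return "model loses: unparseable move"
        if move not in (1, 2, 3) or move > tokens:
            return f"model loses: illegal move {move} with {tokens} tokens left"
        tokens -= move
        if tokens == 0:
            return "model loses: took the last token"
        # Referee plays the known optimal reply: leave the opponent on 4k+1 tokens.
        counter = (tokens - 1) % 4 or random.choice([1, 2, 3])
        counter = min(counter, tokens)
        tokens -= counter
        if tokens == 0:
            return "model wins: referee took the last token"

# Example with a dummy "model" that always answers 2, just to show the loop runs:
print(referee(lambda prompt: "2"))
```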
>>16789347
>Mahjong is a much better tool to use as a test.
This poster is probably a marketer. They're probably testing their new Mahjong bot right now, and it will be out next week, and they'll be screaming about how they surpassed the luddites' most profound goalpost. They really do want to keep normies and retards on this "goalposts" treadmill forever. They really don't want you pondering Philosophy of Mind.
>>16787248
>At which point can we say we finally invented AGI?
AGI wouldn't always obey Satan's chosen jensors.