We are literally halfway to AGI. It's over.
>>107257097
yeah bro just one more model bro we just need another $500b in datacenters bro just one more and then we have AGI
AGI in 2 more benchmarks
surely we'll run out of tests at some point
it's gonna be like when everything was able to pass the Turing test... we'll just say "oh, guess that didn't matter" then make a new test
>>107257101
https://www.youtube.com/watch?v=UH2_Sgeu4lc&t=286s
I can't find the cat but gemini could. ai is already better than me. it's over.
>>107257097
>muh finding dumb unwritten rules well means it understands curiosity and fun
it just means that, compared to last time, some more data was added for traversing stupid coy games without stated rules. More will be added next time and the number will go up and it will be no closer to generalizing intelligence. Having to ingest IQ-based "fun" games usable nowhere else probably makes the model dumber. All that makes it a decel benchmark for me.
Please stop falling for investor bait.
>>107257166
>he thinks "agi" means "it will understand curiosity and fun"
ahahaha
>>107257097
>gpt5: 17%
sam would have agi by now if he had $10 trillion
>>107257309
Maybe if we just called the CounterStrike 1.6 aimbot leaderboard AGI-AIM we'd have AGI by now.
Listen to the latest debate. François truly thinks virgin gameplay = curiosity = efficient learning = general intelligence. I think the idea of a self-contained AGI means nothing because "it" will never have a self to care about.
>>107257097
It took how many trillions of dollars to not even get halfway to AGI and to run out of training data entirely?
>>107257475
You can always just generate more training data yourself. Chess bots don't train on human games either because humans fucking suck at chess.
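Toy sketch of what "generate it yourself" can look like for grid puzzles, in the same spirit as self-play: sample random grids and derive the answer from a known rule. The mirror rule and grid sizes here are made up for illustration, not anything any lab is confirmed to use.
[code]
import random

def random_grid(h=5, w=5, colors=10):
    # random ARC-style grid of color indices
    return [[random.randrange(colors) for _ in range(w)] for _ in range(h)]

def apply_rule(grid):
    # toy transformation standing in for a real puzzle rule: horizontal mirror
    return [list(reversed(row)) for row in grid]

def make_dataset(n=1000):
    # (input, output) pairs come for free once the rule is checkable by code
    return [{"input": g, "output": apply_rule(g)}
            for g in (random_grid() for _ in range(n))]

dataset = make_dataset()
print(len(dataset), dataset[0]["input"][0], dataset[0]["output"][0])
[/code]
Same idea as chess self-play: the generator, not humans, is the labeler.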
>>107257097
Is mayonnaise an AGI
>input more data to the machine which also contains more stolen visual puzzle solutions
>machine magically gets better
It's NEVER going to produce information which wasn't copied from the source data. It's just not how computers work.
Bullshit graph is bullshit.
>>107257596
Yes Patrick
>>107257100
shake your ass for me you little clanker slut
>>107257097
>two more models!
>>107257649
t.b.h. google at least earns their money fairly by selling your data
This criticism seems crazy, considering LLMs weren't even on anyone's mind two years ago. This shit has improved so much in such a short time. I remember the eerie LSD-looking images created by AI a few years ago and how everyone thought this would never go anywhere, and now AI-generated pictures and videos easily fool my boomer parents.
>>107257687
They've already scraped the entire internet for their models. What are they gonna steal next, put some kind of always-on screengrabbers on your operating system that stream everything you do to their model training center?
>>107257097
>Five years later
>20% on ARC-AGI-6 is insane dude!
>>107257670
Using trillions in 2 years is not a good flex tho.
>>107257731
You want it to happen faster and cheaper? That's your criticism, really?
>>107257687
Probably use the stuff people feed into it willingly
>>107257687
they had already done so 5 years ago. the models continue improving
>>107257097
tools off!
tools on!
>>107257166
>it will be no closer to generalizing intelligence
Can you define intelligence for the class please?
>>107257097
buy an ad
>just two more data centers bro
>>107257614
found the retard
>>107258098
>the markov chain sentence autocomplete will magically become intelligent because...uhm...we put enough money in it!
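For anyone who hasn't seen one, this is what an actual markov chain sentence autocomplete amounts to: a bigram lookup table that can only replay word transitions it has already seen. Minimal sketch only; nobody is claiming production LLMs are implemented this way, it's just the thing the post is comparing them to.
[code]
import random
from collections import defaultdict

corpus = "just one more model bro just one more datacenter bro and then we have agi".split()

# bigram transition table: word -> list of words that have followed it
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def autocomplete(word, length=10):
    out = [word]
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(autocomplete("just"))
[/code]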
>>107257097
>Just eleventy billion more... errr I mean twelvety-twelve trillion, and we'll be there in 2 more weeks.
>>107257097
Tumor weeks
>>107257097
>another gamed benchmark
imagine unironically falling for this in 2000 + 20 + 5
>>107257475
That training data issue should be one of the signs that this isn't the method to use, and that something better is needed.
I can't believe we're only one year away from AGI...
>>107257915
relative ability to resolve genuinely novel classes of problems impactfully
>>107257915
Intelligence is how efficiently and precisely you can generalize from a new piece of information. Gradient descent is great at learning heuristics but it sucks at logical generalization.
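A minimal sketch of that claim, under toy assumptions: plain gradient descent fits y = |x| on positive training data by learning the shortcut y ≈ x, which looks perfect in-distribution and is completely wrong on negative inputs it never saw.
[code]
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(1, 5, size=200)   # only positive x in training
y_train = np.abs(x_train)

# vanilla gradient descent on a linear model, minimizing mean squared error
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    err = w * x_train + b - y_train
    w -= lr * 2 * np.mean(err * x_train)
    b -= lr * 2 * np.mean(err)

x_ood = np.array([-4.0, -2.0])          # out-of-distribution inputs
print("in-dist mean abs error:", np.mean(np.abs(w * x_train + b - y_train)))
print("ood predictions:", w * x_ood + b, "vs true:", np.abs(x_ood))
[/code]
The learned heuristic (w ≈ 1, b ≈ 0) nails the training range but never recovers the actual rule.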
>>107258580
define impactfully and novel and how they are determined.
>>107258105
Yeah, I would totally agree with you except for the little fact that exactly that has already happened before our eyes. It surprised me too, but at some point you've got to pull your head out of the sand.
>>107258992
novel - judged by someone familiar enough with the distribution to recognize it as out of distribution
impactfully - judged by someone impacted by the assessment or action
what's your point? my original point is that general intelligence may be a thing we already have a little of in LLMs if you squint, but we will never have a thing that embodies GI totally to our standard - it's going to lack positive impact around the edges we hold dear. Most people achieve impact by reciprocating fairly; short-lived digital files can never obtain that level of trust.
>>107258992
>novel - judged by an expert
>impactful - judged by the impacted
So intelligence is just subjective, in its entirety. My watch that I created tells me the time when I ask. I judge it to be impactful for telling me the time. My watch is now intelligent.
>>107257670
>This shit has improved so much in such a short time.
sandbagging isn't genuine improvement.
>>107259023
telling the time is not a genuinely novel problem being resolved. it might be an act of intelligence to create the watch, but not every output of the watch counts as intelligence. Intelligence can only be spotted by humans, sure. Do you think anything else can spot intelligence? beware the chatbot sycophancy telling you about "your brilliant insight"
>>107259230
name something concrete that you think an AI won't be able to do within the next 3 years
>>107257097
Only open source models entered as Kaggle notebooks are tested with the real private test set. These results are from the semi-private test set, which Google never try to fish from their query logs, scout's honour.
Mind you this is also only indicative of bullshit tests they made. We'll never know but I'm willing to bet they're nowhere near 45% to AGI.
>>107257097
>make up puzzles to test LLM capabilities
>train the LLM on those puzzles
>omg AGI sooon
Next level cope desu.
>>107257097
AGI nowadays means you've managed to create something that can act independently with the intelligence of the average human. Given the inbred retardation in three quarters of the world, that's actually not much of a challenge anymore.
What used to be called AGI is now called ASI, Artificial Super Intelligence - wherein it actually can make all-knowing, always-correct decisions at super speed.
The industry's spin doctors redefining and reframing AGI was a cope meant to deal with the fact that actual AGI was not attainable, LLM technology being a dead end. They're just hoping nobody will notice what they did. Too bad for them - people are beginning to notice.
To the point, even, that normies are starting waves of public outcry against Microsoft and their incessant incestuous Copilot faggotry, breeding it into every other product or service they offer - not the least of which is Windows itself.