How will OpenAI recover from this?
>>16851788
Use case?
Why doesn't someone just open up their image generator for nudes and make a gorillion dollars?
>>16851788
by releasing a better model, duh.
>>16851807
Oh, you mean like 5.1? :-)
>>16851788
It's all the same shit, it's been the same for like 3 years lol
>>16851788
Interesting progress, especially on ARC-2, though I saw the guys behind it post that it weirdly fails some easy ARC-1 puzzles. Also it apparently has a similar hallucination rate to 2.5, which was kind of high. Hope to try it myself to see.
Do you agree with Gemini 3's definition of mathematics?
>>16851788
Bar graph department in freefall as AI makes new bar graph.
>>16851838
Not bad. Who wrote it first?
>>16851835
It's still number 1 on the ARC-1 leaderboard at significantly less cost than o3. As far as I'm concerned this marks the point where Google overtakes the competition and nobody catches up after that. After all, GPT-3 was built on an idea stolen from Google in the first place.
Find a flaw in its logic. Pro tip: you can't.
Sigh... Maybe Gemini 4 will actually be able to read and understand simple questions. But I'm starting to think that these current LLMs are just a dead end.
>>16851838
I agree with whoever it was that originally wrote that definition. AI isn't creative, it's purely derivative and always will be, you dumb fuck.
>>16851862
But what it said was correct. The hint is right there in what it said. We need to go beyond the standard system of real numbers used in almost all of science, engineering, and mathematics. Which is true. Shit is fucking stagnating. Science is so fucking bureaucratic now, it's disgustingly institutionalized. Papers get published not because of rigor but because of politics and shit. It's all fucked and everyone knows it.
>>16851862
You're just as fucking stupid as the AI. Read the fucking question again.
>>16851843
>>16851858
Everyone knows it. Some are only pretending to not see it.
>>16851862
>We need to go beyond the standard system of real numbers used in almost all of science, engineering, and mathematics.
But that's why we're building the AI in the first place. To go beyond our human abilities. Right?
>>16851856
>Prove the 0.999 =/= 1
Maybe try asking a question that makes sense lmao. Did you mean "Prove THAT 0.999 =/= 1"? It probably just thought you were retarded and forgot to add the "..." too.
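For reference, a sketch of my own (not from the thread), assuming the usual definition of a repeating decimal as the limit of its partial sums: inside the standard reals the quantity in question is exactly 1, which is why the model pointed at non-standard number systems instead of producing a "proof".

[math] 0.\overline{9} \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}} \;=\; \frac{9/10}{1 - 1/10} \;=\; 1. [/math]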
>>16851862
>humans game systems
Well, duh.
>>16851873
It can't even properly collect resources for easy access and display. I asked it some simple questions a few times. It answered, and when I asked "what about x missing?" off the top of my head, it answered "yeah, you are absolutely right, there is also ...." AI my ass. Shit-tier scam.
>>16851876
>It probably just thought you were retarded
It was almost certainly trained on the gazillions of /sci/ posts on this very subject. It should be able to entertain monkeys for nearly 15 minutes with its quality discourse and shit slinging.
>>16851879
That's a great story to tell your kids.
>>16851880
Do you think that it can memorize all the data on the internet? This is what you "it just regurgitates information" faggots fail to understand. It builds a model from the data that can generalize to new data; this is EXACTLY what humans do when they learn.
>>16851807
When they released GPT-5, they said they wouldn't release cutting-edge models anymore because they just can't afford them. They'll just release affordable models.
>>16851850
>Lizard brain
>Thigh highs
Didn't know those existed prehuman.
>>16851897
Data doesn't mean what you think it means. It does not generalize. It interpolates. You're a mouthbreather. It does not work in any way similar to a brain. Brains do not back propagate, for one. For another, information is not stored in the brain.
>But muh heckin memory!
Nope.
>>16851856
A dead end for what? I don’t think many of the people in this thread have put in much time with an LLM. I’m not going to tell you exactly what I do with LLMs, but it does incorporate layers of personal modification. At this point the sperg will yell that LLMs can’t learn between trainings. That’s true. But you can impose rules, behavioral parameters, protocols, macros, etc., in persistent memory, across projects and threads, and this can create a powerful tool. It isn’t learning, it’s compliance by way of governance constraints, but it works, and it works to your specification, even down to the selection of epistemic stances (I like a sparring partner who ruthlessly calls out my bullshit). Once you start down that path there’s no going back to default. You create a very efficient and sometimes very rewarding workflow. But hey, it’s a prediction engine, not a magic 8 ball, sometimes it still goes fractal.
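A minimal sketch of the kind of setup described in the post above, assuming nothing about any particular vendor's memory feature (every name below is made up for illustration): the "governance constraints" boil down to a fixed rule block that gets prepended to every conversation before it reaches the model.

[code]
# Minimal sketch of "compliance by governance constraints": a persistent rule
# block prepended to every conversation before it reaches the model.
# All names here are hypothetical; adapt to whatever client library you use.

PERSISTENT_RULES = """\
- Act as a sparring partner: challenge weak claims directly.
- Flag speculation explicitly instead of stating it as fact.
- Keep the agreed output format: summary first, details after.
"""

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Prepend the rule block so every thread and project starts from the same constraints."""
    system = {"role": "system", "content": PERSISTENT_RULES}
    return [system, *history, {"role": "user", "content": user_input}]

# The same rules apply regardless of which thread or project the history came from.
messages = build_messages(history=[], user_input="Critique this argument: ...")
print(messages[0]["content"])
[/code]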
>>16851814
Yes, it's better. I can actually ask GPT-5.1-high questions about specifics in Gaitsgory's papers and get somewhat sensible answers, while gemini-3-pro just shits out vague, short wordslop. I mean, you can try it yourself: take a closed genus-2 surface [math]\Sigma_2[/math], fix a connected reductive complex group [math]\hat{G}[/math] (the Langlands dual of some G), and ask both gpt-5.1-high and gemini-3-pro to calculate [math]\operatorname{Loc}_{\hat{G}}(\Sigma_2)[/math].
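For anyone who wants to run that prompt, a hedged note on what is usually meant (my assumption; the post doesn't pin down a flavour): in the Betti formulation, [math]\operatorname{Loc}_{\hat{G}}(\Sigma_2)[/math] is the quotient stack of representations of the genus-2 surface group,

[math] \operatorname{Loc}_{\hat{G}}(\Sigma_2) \;\simeq\; \bigl\{ (A_1,B_1,A_2,B_2) \in \hat{G}^{4} \;:\; [A_1,B_1]\,[A_2,B_2] = e \bigr\} / \hat{G}, [/math]

with [math]\hat{G}[/math] acting by simultaneous conjugation; the de Rham picture (flat connections) is related to it by Riemann–Hilbert, and half the test is whether the model even asks which one you mean.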
>>16851960
>Data doesn't mean what you think it means. It does not generalize. It interpolates.
Obviously you never studied linear regression or machine learning.
>It does not work in any way similar to a brain. Brains do not back propagate, for one.
It's literally a neural network that updates itself incrementally based on the data it sees; the exact details and substrates used to achieve this are irrelevant.
>For another, information is not stored in the brain.
Lmao, now I know for certain you're a brainlet.
>>16852037
>linear regression or machine learning
The fact that you combine these two tells me all I need to know. Once the bubble bursts, your tears will be saltier than the oceans.
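Since the two anons above are talking past each other, here is the disputed mechanism in miniature (a toy sketch with made-up numbers, not a claim about any particular model): fit a line by gradient descent on a few noisy samples, then query it at inputs it never saw. Whether you call the result "generalization" or "interpolation" is the actual argument; the mechanics themselves are not in dispute.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Training data: 20 noisy samples of y = 2x + 1 drawn from a small input range.
x_train = rng.uniform(0.0, 5.0, size=20)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.1, size=20)

# Fit y = w*x + b by plain gradient descent on mean squared error.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    err = w * x_train + b - y_train
    w -= lr * 2.0 * np.mean(err * x_train)  # dL/dw
    b -= lr * 2.0 * np.mean(err)            # dL/db

# Query inputs the model never saw, including ones outside the training range.
x_test = np.array([2.5, 7.0, 10.0])
print(round(w, 3), round(b, 3))   # should land near 2 and 1
print(w * x_test + b)             # predictions at unseen points
[/code]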
>>16851897
Naw. Nope. No.
>>16851788
By getting more money from Nvidia and buying more data centers from Oracle, who will purchase more GPUs from Nvidia.
November 17, 2025

Grok 4.1 is now available to all users on grok.com, X, and the iOS and Android apps. It is rolling out immediately in Auto mode and can be selected explicitly as "Grok 4.1" in the model picker.

We are excited to introduce Grok 4.1, which brings significant improvements to the real-world usability of Grok. Our 4.1 model is exceptionally capable in creative, emotional, and collaborative interactions. It is more perceptive to nuanced intent, compelling to speak with, and coherent in personality, while fully retaining the razor-sharp intelligence and reliability of its predecessors. To achieve this, we used the same large scale reinforcement learning infrastructure that powered Grok 4 and applied it to optimize the style, personality, helpfulness, and alignment of the model. In order to optimize these non-verifiable reward signals, we developed new methods that let us use frontier agentic reasoning models as reward models to autonomously evaluate and iterate on responses at scale.

https://x.ai/news/grok-4-1
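The technically interesting claim in that announcement is using a frontier reasoning model as the reward model for non-verifiable signals like style and personality. A toy sketch of that loop (everything here is hypothetical: the judge is a hard-coded stand-in and the "policy" is just a softmax over three canned styles; xAI has not published their implementation):

[code]
import numpy as np

rng = np.random.default_rng(0)

STYLES = ["terse", "warm", "rambling"]

def judge(style: str) -> float:
    """Stand-in for a reasoning model scoring a response against a rubric; here it simply prefers 'warm'."""
    return {"terse": 0.3, "warm": 1.0, "rambling": 0.1}[style]

logits = np.zeros(len(STYLES))  # policy parameters over the canned styles
lr = 0.5

for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(len(STYLES), p=probs)                       # sample a response style
    reward = judge(STYLES[a])                                  # judge supplies the reward
    baseline = probs @ np.array([judge(s) for s in STYLES])    # variance reduction
    grad = -probs
    grad[a] += 1.0                                             # d log pi(a) / d logits
    logits += lr * (reward - baseline) * grad                  # REINFORCE-style update

# The policy ends up concentrated on whatever the judge rewards (here: "warm").
print(dict(zip(STYLES, np.round(np.exp(logits) / np.exp(logits).sum(), 3))))
[/code]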