GPT-4o is now a finetuned GPT-4o-mini provided at the same (17x more expensive) prices.
Thoughts, anons?
>>103290646
Claudebros stay winning as always.
>>103290646
Altman's done this before with GPT-3.5. Generally the strategy is to start off with a bigger model before moving to a smaller / dumber / cheaper version while obfuscating the details. I don't think it's GPT-4o-mini size, going by its LiveBench score, but it's probably around 2x smaller.
>>103290646
people actually pay for it?
>>103290676
Claude is worse too
>>103290844
I do. $20 a month.
>>103290676
My wife Bijou
>>103291042
Does it actually offer any benefits? I don't see any, since its attention sucks anyway. I open a new chat for every prompt.
>>103291106
It's mostly usage limits. You get to use the best-available models without the strict limits that free-tier people have.
>>103290646
I've been noticing it's been getting dumber at coding tasks. I thought it was just me. (I have the $20 subscription that I use a lot.)
>>103290646
Two years ago, the cost of running their models was $700k per day. Today, with many companies using OpenAI's models and even more people using the free version, the cost of running them is likely somewhere between $2 million and $3 million per day. It's impossible for them to turn a profit unless they drastically reduce the parameters of their models.
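The anon's figures are unverified claims, not official OpenAI numbers, but the arithmetic behind them is easy to sanity-check. A quick sketch of what those daily costs would mean per year:

```python
# Back-of-envelope check of the anon's claimed figures.
# All dollar amounts are the poster's claims, not verified data.
daily_cost_2022 = 700_000         # claimed $/day two years ago
daily_cost_now_low = 2_000_000    # claimed lower bound today
daily_cost_now_high = 3_000_000   # claimed upper bound today

yearly_low = daily_cost_now_low * 365
yearly_high = daily_cost_now_high * 365
growth = daily_cost_now_low / daily_cost_2022

print(f"yearly: ${yearly_low:,} to ${yearly_high:,}")
print(f"that's at least a {growth:.1f}x increase over two years")
```

Even the low end of the claim works out to hundreds of millions of dollars per year, which is the point the post is making about profitability.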
>gpt4
>gpt4o
>gpt4omini
>gpt4omedi
>gpt4omaxi
>gpt4oultra
we get it, apple.
>>103290844
They don't, that's the problem. OpenAI is still trying to figure out how to make anyone but managers care about and pay for AI after the novelty wears off for normalfags.
>>103290646
all this just so companies can have AI integrated into everything so it can recommend a new pair of shoes to buy in place of Google
dumb as shit
>>103290844
o1. That's it. I do NOT use the models for anything else that matters. Coding with o1 and decrypting shit is useful with these models. Otherwise you could just get enough free uses of the GPT-4 models everywhere else.
Allegedly it's also more filtered for cooming. I haven't tried it though (I only use Claude).
>>103291386
Yeah, smaller models learn censorship better for some reason. I think OAI is pretty much finished. They don't have GPT-5, we've reached the limits of transformers, and Claude Sonnet is pretty much the best you can get.
>>103291408
yeah, LLMs are shit. they're cool for extremely simple things, but you need a fundamentally (not as a buzzword) different system. Fundamentally, because with this system you cannot go further. The very fact that they are "trained" and then done means these things are a meme. Once I understood input and output tokens, that shit lost all its magic. Something like that is nowhere close to true "hard" AI.
>>103291355
>decrypting shit
like what
>>103290676
why does she have a pussy on her back
>>103291678
why don't you?
>>103291678
does she?
>>103290676
Claude is objectively worse in every way
>>103291408
Yeah, Ilya went on record a couple of weeks ago saying scaling has hit a wall and that test-time compute is basically the only thing they can think of. o1 is good but way too expensive for most people to keep using once better, cheaper alternatives come out (the DeepSeek R1 model is already on the immediate horizon, and god knows what else the other labs have cooking).
>>103291501
I feel like LLMs will still have a use in a larger system, but I agree that I don't think they'll be that system in and of themselves. I think you need at least three things that LLMs don't currently offer: a reasoning component to verify that what your model produces is actually correct, a way to run actual computation to verify and guide your thought process that isn't just "guess what comes after this word", and, perhaps most importantly, a way to scale so that the system doesn't run your company into the fucking ground. None of these are things that LLMs solve in and of themselves.
Well, if LLMs are stalling, then that's good news for me at least. I can get good use out of them, but they aren't putting my job in danger any time soon, so it's the best of both worlds. I think they'll continue to develop, though.
The AI hype gravy train has been derailed, so they have to actually prove that this technology can turn a profit.... RIP... FARTIFICIAL... TECHNOLOGY... FOREVER....
>>103291042
You just won the biggest retard award on /g/ for the month of November. Care to specify your feels regarding this?
why are people paying for chatgpt when copilot is free
>>103291224
I think the problem is less that they have to do this and more that they do it in the most lying, scheming, piece-of-shit way possible. They hide everything about their services, models, and even their research, and then don't even give their customers so much as a heads-up before downgrading their model's intelligence on them, which, unless they're braindead retarded (and to be fair, some of them are), they will figure out sooner or later.
>>103291408
Except o1 proves that we haven't even begun to test the waters of inference scaling. Not to mention better models can be used to re-process data. That means better tagging, detecting and/or correcting wrong information (e.g. wrong answers on math and science), and better data formatting.
>>103291678
Are you retarded?
>>103294708
>inference scaling
I don't want this to be the future. A scenario where I sit on my ass and watch my LLM generate a fuckton of output tokens, which I don't get the privilege of seeing but do still get to pay for, only for it to produce an output that doesn't even rival non-CoT Claude 9 times out of 10, sounds fucking miserable.
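The billing complaint is concrete: with o1-style models you pay for hidden reasoning tokens on top of the visible answer. The prices and token counts below are made-up placeholders (not OpenAI's actual rates), just to show how the hidden portion can dominate the bill:

```python
# Illustration of paying for reasoning tokens you never see.
# All numbers are hypothetical placeholders, not real pricing.
price_per_1k_output = 0.06        # hypothetical $/1K output tokens
visible_answer_tokens = 300       # what you actually get to read
hidden_reasoning_tokens = 5_000   # hypothetical hidden CoT, still billed

billed = (visible_answer_tokens + hidden_reasoning_tokens) / 1000 * price_per_1k_output
visible_only = visible_answer_tokens / 1000 * price_per_1k_output

print(f"billed ${billed:.3f}, but only ${visible_only:.3f} worth is visible")
```

Under these placeholder numbers the invisible reasoning is well over 90% of the cost, which is the scenario the post is complaining about.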
>>103290676
You can use both, retard
>>103291678
I guess there's always got to be someone around that AIs are smarter than.
>>103291501
>Fundamentally because with this system you cannot go further.
Technically not true, but might as well be true. The scaling on the core learning algorithm sucks (attention is quadratic in context length), and that makes scaling up progressively harder. It's impressive how far it has been pushed. Actual neural-inspired learning architectures do much better at scaling.
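A toy illustration of that quadratic cost: in a transformer, every token attends to every other token, so the attention score matrix alone has n × n entries. This only counts score-matrix entries — real models have many other costs — but it shows why doubling the context more than doubles the work:

```python
# Counting attention score entries to show quadratic growth.
# This ignores everything except the n x n score matrix.

def attention_score_entries(n_tokens: int) -> int:
    # one score per (query token, key token) pair
    return n_tokens * n_tokens

for n in (1_000, 2_000, 4_000):
    print(f"{n:>5} tokens -> {attention_score_entries(n):>12,} score entries")
```

Doubling the context length quadruples the entries, which is the "scaling factors suck" point in the post above.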
>>103290646
No shit, they want you to use the API, which they charge for lmao
>>103291501
>Something like that is nowhere close to true "hard" ai
"Hard AI" is completely in the realm of science fiction with the current approach. Also, this is a good time to point out that throughout the field's existence, AGI has always been "just a few decades away", every decade.
>>103296427
But anon, Altman said it was a handful of thousands of days away
>>103291042
>>103291678
That's her rock-hard back.
>>103290676
>Claudebros stay winning as always.
Claudebros keep running out of credits every day
>>103290844
Normies just want to ask le funny questions, or school kids are using it to write their homework. If they wanted money, they would make a couple of highly specialised models and whatnot. Trying to cater to everyone while eventually neutering the service is a problem... See what happened to StabilityAI.
>>103291042
let me guess, you paid for winrar too?