/g/ - Technology

File: 1732403273894744.png (43 KB, 628x819)
GPT-4o is now a finetuned GPT-4o-mini provided at the same (17x more expensive) prices.
Thoughts, anons?
>>
File: 1727583432665509.png (2.7 MB, 2493x3500)
>>103290646
Claudebros stay winning as always.
>>
>>103290646
Altman's done this before with GPT-3.5. Generally the strategy is to start off with a bigger model before moving to a smaller / dumber / cheaper version while obfuscating the details.
I don't think it's GPT-4o-mini size due to livebench score, but it's probably around 2x smaller.
>>
>>103290646
people actually pay for it?
>>
>>103290676
Claude is worse too
>>
>>103290844
I do. $20 a month.
>>
>>103290676
My wife Bijou
>>
>>103291042
Does it actually offer any benefits? I don't see any, as the AI's attention sucks anyway. I open a new chat for every prompt
>>
>>103291106
It's mostly usage limits. You get to use the best-available models without such strict limits as the free-tier people have.
>>
>>103290646
I have been noticing it has been getting dumber in coding tasks. I thought it was just me. (I have the $20 subscription that I use a lot).
>>
>>103290646
Two years ago, the cost of running their models was $700k per day. Today, with many companies using OpenAI's models and even more people using the free version, the cost of running their models is likely somewhere between $2 million and $3 million per day. It's impossible for them to turn a profit unless they drastically reduce the parameters of their models.
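A back-of-envelope sketch of the cost claim above (a minimal sketch: the per-token compute cost and daily token volume are hypothetical numbers picked for illustration; only the ~$2M/day ballpark comes from the post):

```python
# Hypothetical figures: compute cost per 1k tokens served and tokens/day.
# Dense-transformer inference cost scales roughly linearly with parameter
# count, which is why shrinking the models is the obvious cost lever.
def daily_cost(cost_per_1k_tokens: float, tokens_per_day: float) -> float:
    """Total daily serving cost in dollars."""
    return cost_per_1k_tokens * tokens_per_day / 1000.0

# e.g. ~1 trillion tokens/day at $0.002 of compute per 1k tokens
print(daily_cost(0.002, 1e12))  # 2000000.0 -> the ~$2M/day ballpark
```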
>>
>gpt4
>gpt4o
>gpt4omini
>gpt4omedi
>gpt4omaxi
>gpt4oultra
we get it, apple.
>>
>>103290844
They don't, that's the problem.
OpenAI is still trying to figure out how to make anyone but managers care about and pay for AI after the novelty wears off for normalfags
>>
>>103290646
all this just so companies can have AI integrated into everything so it can recommend a new pair of shoes to buy in place of Google

dumb as shit
>>
File: sex.png (2.96 MB, 1280x1856)
>>103290844
o1. That's it. I do NOT use the models for anything else that matters. Coding with o1 and decrypting shit is useful with these models. Otherwise you could just get enough free uses for gpt-4 models everywhere else
>>
Allegedly it's also more filtered for cooming. I haven't tried it though (I only use Claude).
>>
>>103291386
Yeah, smaller models learn censorship better for some reason.
I think OAI is pretty much finished. They don't have GPT-5, we've reached the limits of transformers, and Claude Sonnet is pretty much the best you can get.
>>
File: 122161730_p0.jpg (505 KB, 1500x2419)
>>103291408
yeah, llms are shit, they are cool for extremely simple things, but you need a fundamentally different system, not a buzzword. fundamentally, because with this system you cannot go further. the very fact that they are "trained" and then done means that these things are a meme. once I understood input and output tokens, that shit lost all magic. something like that is nowhere close to true "hard" ai.
>>
>>103291355
>decrypting shit
like what
>>
>>103290676
why does she have a pussy on her back
>>
>>103291678
why don't you?
>>
>>103291678
does she?
>>
>>103290676
Claude is objectively worse in every way
>>
>>103291408
Yeah, Ilya went on record a couple of weeks ago saying scaling has hit a wall, and basically that test-time compute is the only thing they can think of.
o1 is good but way too expensive for most people to continue using once better cheaper alternatives come out (the DeepSeek r1 model is already on the immediate horizon, and god knows what else other labs have cooking).
>>
>>103291501
I feel like LLMs will still have a use in a larger system, but I agree that I don't think they'll be that system in and of themselves.
I think you need at least three things that LLMs don't currently offer - you need a reasoning component to be able to verify that what your model produces is actually correct, you need a way to run actual computation that you can use to verify and guide your thought process that isn't just "guess what comes after this word", and perhaps most importantly, you need a way to scale so that this system doesn't run your company into the fucking ground.
None of these are things that LLMs solve in and of themselves.
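The generate-plus-verify idea above can be sketched roughly like this (all names hypothetical; `generate` stands in for any LLM call, and the verifier here is a toy arithmetic check rather than a real reasoning component):

```python
# Toy sketch of a generate -> verify loop: the model drafts an answer,
# and a separate computation step checks it instead of trusting the guess.
def generate(prompt: str) -> str:
    # placeholder for an LLM completion (hypothetical)
    return "2 + 2 = 4"

def verify_arithmetic(claim: str) -> bool:
    # run the actual computation to check the model's claim
    lhs, rhs = claim.split("=")
    return eval(lhs) == int(rhs)  # eval is acceptable in this toy example

draft = generate("What is 2 + 2?")
assert verify_arithmetic(draft)  # only accept outputs that check out
```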
>>
Well if LLMs are stalling then that's good news for me at least. I can get good use out of them but they aren't putting my job in danger any time soon. So it's best of both worlds. I think they will continue to develop though.
>>
The AI hype gravy train has been derailed, so they have to actually prove that this technology can turn a profit.... RIP... FARTIFICIAL... TECHNOLOGY... FOREVER....
>>
>>103291042
You just won the biggest retard award on /g/ for the month of November, care to specify your feels regarding this?
>>
why are people paying for chatgpt when copilot is free
>>
>>103291224
I think the problem is less that they have to do this and more that they do it in the most lying, scheming, piece of shit way possible. They hide everything about their services, models, and even their research, and then don't even give their customers so much as a heads up before downgrading their model's intelligence on them, which, unless they're braindead retarded (and to be fair, some of them are), they will figure out sooner or later
>>
>>103291408
Except o1 proves that we haven't even begun to test the waters of inference scaling. Not to mention better models means they can be used to re-process data. That means better tagging, detecting and/or correcting incorrect information (e.g. wrong answers on math and science), and better data formatting.
>>
>>103291678
Are you retarded?
>>
>>103294708
>inference scaling
I don't want this to be the future. A scenario where I get to sit on my ass and watch my LLM generate a fuckton of output tokens, which I don't get the privilege of seeing but do still get to pay for, only to provide an output that doesn't even rival non-CoT Claude 9 times out of 10 sounds fucking miserable
>>
>>103290676
You can use both retard
>>
>>103291678
I guess there's always got to be someone around that AIs are smarter than.
>>
>>103291501
>Fundamentally because with this system you cannot go further.
Technically not true, but might as well be true. The scaling factors on the core learning algorithm suck (attention cost is quadratic in context length), and that makes scaling up progressively harder. It's impressive how far it has been pushed.
Actual neural-inspired learning architectures do much better at scaling.
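Minimal sketch of the quadratic point (hypothetical function, just counting query-key score computations in vanilla self-attention):

```python
def attention_score_ops(seq_len: int) -> int:
    # vanilla self-attention: every query position scores against every key
    return seq_len * seq_len

# doubling the context length quadruples the work
print(attention_score_ops(2048) // attention_score_ops(1024))  # 4
```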
>>
>>103290646
No shit, they want you to use the API which they charge for lmao
>>
>>103291501
>Something like that is nowhere close to true "hard" ai
"Hard AI" is completely in the realm of science fiction in terms of the current approach. Also, this is a good time to point out that throughout the field's existence, AGI has always been "just a few decades away", every decade.
>>
>>103296427
But anon, Altman said it was a handful of thousands of days away
>>
File: cca.gif (125 KB, 720x480)
>>103291042
>>
>>103291678
That's her rock-hard back.
>>
>>103290676
>Claudebros stay winning as always.
Claudebros keep running out of credits every day
>>
>>103290844
Normies just want to ask le funny questions, or school kids use it to write their homework.
If they wanted money they would build a couple of highly specialised models and whatnot.
Trying to cater to everyone while eventually neutering the service is a problem... See what happened to StabilityAI.
>>
>>103291042
let me guess, you paid for winrar too?


