/g/ - Technology

File: 1352163512731.png (585 KB, 1169x1036)
First it was OAI with Sora, now Anthropic.
It's unironically over for AIfags.
>>
So? Get the bigger subscriptions. Are you poor?
>>
>>108515884
There is no bigger tier than $200, anon.
>>
>>108515872
I'm so fucking glad this shit is happening. As a human filter for an AI workshop, I wanted all those retards to go into withdrawal after being so fucking retarded
>>
oh no, there's too much demand for a company's product
how awful for them
they're only going to 10x their revenue in 3 months again
>>
>>108515951
They're losing money on every single subscription
>>
I don't understand why people don't care about this sort of centralization of power. Imagine tying your ability to produce, at such a fundamental level, to a VC-fueled service.
>>
>>108515957
it's a good thing they reduced usage limits and force businesses to buy tokens then right?
>>
>>108515951
no, they are charging way more tokens for the same operations than they were a few weeks ago
>>
>>108515971
Anon, the vast majority of people are absolute NPC mouthbreathers. The elites are completely right about them.
>>
>>108515957
They aren't losing money.
Opus 4.6 is roughly the same size as Kimi 2.5 in terms of parameter count, yet Kimi 2.5 is around 10-15 times cheaper than Opus 4.6.
Inference cost is not the issue. The real problem is that Anthropic does not have enough GPUs to cover both demand and its research, so they are making shit up now.
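For what it's worth, the 10-15x gap is just a rate-card ratio. A minimal sketch of the arithmetic (every price and token count here is an assumed placeholder, not real pricing for either model):

```python
# Rough per-request cost comparison between two hosted models.
# All prices and token counts below are made-up placeholders,
# NOT either vendor's real rate card.
PRICES_PER_MTOK = {           # (input, output) USD per million tokens
    "opus": (15.00, 75.00),   # assumed premium-tier pricing
    "kimi": (1.00, 5.00),     # assumed budget-tier pricing
}

def job_cost(model: str, input_tok: int, output_tok: int) -> float:
    """USD cost of one request under the assumed rate card."""
    p_in, p_out = PRICES_PER_MTOK[model]
    return (input_tok * p_in + output_tok * p_out) / 1_000_000

# A typical coding-agent turn: large context in, modest diff out.
opus = job_cost("opus", 100_000, 4_000)
kimi = job_cost("kimi", 100_000, 4_000)
print(f"opus ${opus:.2f} vs kimi ${kimi:.2f} -> {opus / kimi:.1f}x")
```

With these made-up numbers the ratio comes out to 15x; plug in the real rate cards to get the actual figure.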
>>
>>108515872
You need a ration card to write code?
>>
>>108516001
Yes. Some of us are mentally restrained and we cannot just "learn to code".
>>
File: 1000005032.png (194 KB, 508x492)
>>108515889
You can always just pay for their API per Mtok.
>>
>>108515951
They are serving CC at a loss.
>>
>>108515986
They are losing money.
They have to pay for the training. They don't only have operational costs.

When you rent a car, do you think the rental company only pays for the car's maintenance and doesn't have to purchase the car in the first place?
Someone's gotta pay for that.
>>
>>108516012
>20 tokens per second with 90% uptime
yeah...
>>
>>108515872
completely unsustainable

-Quality of the work is questionable
-How long can the hardware last? 5 years?
-To make it worthwhile and foolproof you need way more tokens than they thought initially.
-Still need some top biomonkeys to check the pulse of the processes, can't fire everyone
-Chinks throw out free versions, so you can't just monopolistically raise your prices

US AI companies are fucked, maybe Google is sustainable because they have their own money and not only investors, but I don't know the numbers.
>>
>>108516031
inference is profitable
the training itself is also technically 'profitable' because the model generates more revenue than its training cost
the losses are coming from the up front investments in datacenters
but investors are happy to pony the money up for more datacenters because the demand for tokens is real and insatiable
demand will continue to outpace compute supply for at least a decade
gpu shelf life seems pretty good because h100s from early 2023 are making more money now than they ever have - they'll probably be in service for another 2 years at least
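The shelf-life claim can be sanity-checked with a toy payback model; every input below is an assumption for illustration, not a vendor or market figure:

```python
# Toy payback model for a single inference accelerator.
# All three inputs are assumptions, not real vendor/market figures.
capex_usd = 30_000        # assumed purchase price of one card
revenue_per_hour = 2.50   # assumed blended rental/inference income
opex_per_hour = 0.80      # assumed power, cooling, and hosting cost

margin = revenue_per_hour - opex_per_hour   # gross margin per GPU-hour
payback_hours = capex_usd / margin          # hours to recoup the card
print(f"payback in {payback_hours / 24 / 365:.1f} years")
```

Under these assumptions a card running continuously pays for itself in about two years; the conclusion flips quickly if utilization or rental rates drop.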
>>
>>108515872
Anthropic makes good models, but when it comes to inference they have always had issues. Their models are painfully slow compared to OAI's models, for example. And their servers suck balls.
Claude Code sometimes takes 30+ minutes to handle 200k+ tokens of back and forth overall. Which is absolutely terrible.
Compared to Codex it is night and day.
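That complaint works out to a surprisingly low effective rate. A quick back-of-the-envelope, where the session totals come from the post and the turn count and per-turn overhead are assumed:

```python
# Effective vs. overhead-adjusted throughput for a long agent session.
total_tokens = 200_000   # tokens exchanged over the session (from the post)
wall_seconds = 30 * 60   # 30+ minutes of wall clock (from the post)
turns = 40               # assumed number of request/response round trips
overhead_per_turn = 20   # assumed seconds lost per turn to queuing/tools

effective_tps = total_tokens / wall_seconds
generation_seconds = wall_seconds - turns * overhead_per_turn
model_tps = total_tokens / generation_seconds
print(f"effective {effective_tps:.0f} tok/s, "
      f"excluding overhead {model_tps:.0f} tok/s")
```

Even if the model itself decodes at 200 tok/s under these assumptions, the user only sees ~111 tok/s end to end, so queuing and tool overhead can dominate perceived speed.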
>>
File: PitBrainlet.png (35 KB, 211x239)
>>108516072
>inference is profitable
>the training itself is also technically 'profitable'
>>
Btw, Huang has already jumped off the investing rollercoaster.
microslop is removing Copilot functions
claudecode went opensource without concest

aren't these the riders of the bubblecalypse?
>>
File: HCkJ7yNWEAAE0fx.png (307 KB, 2480x2268)
>>108515872
Who cares?
The core of their business is enterprise API calls.
Subscriptions are a money loser for them.
>>
>>108515986
>Opus 4.6 is roughly the same size as Kimi 2.5
Source?
>>108516020
>They are serving CC at loss.
Source?
>>
>>108515872
Oh look, it's the enshittification I keep talking about.
These companies have to staunch the bleeding, and controlling token usage for free tiers is one such way.
>>
>>108515872
what are you gonna do about it? if you don't want to pay use something else. if you keep giving them more money they'll just keep raising prices to find your limit. this is the monetization phase of AI.
>>
>>108516001
Yep, they need to beg Dario for good boy tokens.
>>
>>108516020
Only if you go by their own bs pricing. Their API is overpriced. But the reason they always try to charge more is that they have to cover the expenses and research, data loicensing and such.
That's partially why chink inference is cheaper. They just steal shit and don't have to cover anything but hardware costs. They are likely making good money on that.
>>
File: 1774438131476503.png (1.25 MB, 1024x768)
>>108516411

Does this chart not include Gemini? Or is its market share really too small to count?
>>
>>108516110
That is because they are the de facto #1 when it comes to vibecoding bs. Others are barely catching up, mostly just chasing graphs, not real performance.
>>
>>108516570
>Does this chart not include Gemini?
Yeah, it wasn't included, but there are other studies that place Google in third place, closing the gap with OpenAI at ~20%


