/g/ - Technology


Thread archived.
You cannot reply anymore.




File: 96hn_2k.jpg (110 KB, 1080x484)
China has finally entered the graphics card race. NVIDIA sells cards with the same specs for $10k. Is NVIDIA finished?
>>
File: 1737407733422493.png (280 KB, 945x512)
>>
File: 1733059163926260.png (11 KB, 811x149)
>>
>>106436296
Model size?
>>
>>106436159
>Is NVIDIA finished?
what's the driver support for the Huawei card like?
>>
>>106436159
>Memory: LPDDR4X
sounds slow
>>
imagine the difference a couple of decades will make in the gpu market
>>
>>106436159
Is this an official listing from Huawei or is this some third-party seller?
>>
>>106436364
Fine for LLMs.
>>
>>106436159
No active cooling? Probably just a toy.
>>
>>106436296
>>106436407
amd 395+ has 196gb vram and is 2.2x faster than a 4090 while using 87% less power.

It's over for Nvidia
>>
It's going to take at least 10 years for the Chinese to develop their software. I wouldn't bother with it unless the price was like a fraction of Nvidia's.
>>
I heard chinese ai companies tried huawei but ditched them for nvidia again. Big vram numbers aren't the whole story
>>
The more competition the better, but westoids will seethe anyway
>>
>>106436309
you weren't supposed to mention that... westoids...
>>
>>106436432
a lot of people are vendor locked to cuda. using zluda gets you sued by nvidia. huawei and amd don't have anything as robust as cuda yet. their alternatives only work on a fraction of their products, while cuda is designed to work on anything with a cuda core, from laptop to desktop to datacenter.
>>
>>106436500
Imagine thinking chinks (and major chink corporations at that) are even remotely worried about being sued by Nvidia.
>>
>>106436500
>huawei and amd dont have anything as robust as cuda yet.
AMD next design, UDNA, is supposedly coming to challenge that, unifying everything. I even abandoned getting a 9000 series to wait for it given my 6600 is still serviceable, AI workloads notwithstanding.
>>
>>106436407
LPDDR4X is trash compared to GDDR7, which is trash compared to HBM3E.
we're talking about scraps from the bottom of the barrel. they don't want you to have fast local ai
>>
>>106436159
I want a 10 usd GPU for LLM.
>>
>>106436432
>I heard
nice source shlomo
>>
File: it_begins_by.jpg (15 KB, 223x262)
>>106436159
the future is chinese i told you
>>
>>106436500
>using zluda gets you sued by nvidia
just operate your business outside of globohomo controlled countries
>>
Genuinely excited for affordable chinese gpus to hit the market. Might take a few years but even if they end up flawed they will hopefully force amd and nvidia to stop fucking around.
>>
bing chilling
>>
Probably using old as fuck ram because the NPUs are tiny 8W chips (it has 4)
>>
>>106436374
It's not time that's the issue, it's a monopoly. Asians love monopolies. Just look at the way they got together to raise the prices of LCD TVs.
>>
>NPU not GPU
>so bad DeepSeek had to abandon even trying to use them
>dumped onto Aliexpress for less than the VRAM chips cost
kys dumb wumao
>>
>>106436419
>10 years
no it doesn't; all they need to do is put LLMs to work and verify the output against an automated test suite
>>
>>106437991
>NPU not GPU
you don't get it, do you? by doing an NPU they don't have to care about all the legacy GPU compatibility shit. one could argue intel could have moved much faster if they hadn't done a GPU and had gone straight to an NPU
>>
>>106437991
>>so bad DeepSeek had to abandon even trying to use them
isn't that just the first gen? there are always some kinks to hammer out. those are certainly good enough for hobbyists
>>
>the ram is more expensive
~250 dollars in ram. (ie MT53E2G32D4DE-046 on AX)
>>
>>106436432
>I heard chinese ai companies tried huawei but ditched them for nvidia again
i would soiquote you if i wasn't permabanned from posting images
how's the weather in tel aviv?
>>
>>106436500
>a lot of people are vendor locked to cuda
and this is why I've hated cuda from the beginning.
>>
>>106436159
>fake specs
>fake drivers
based!
>>
>>106438270
I wouldn't expect Huawei to post fake numbers, although I would expect a Chinese reseller to do that.

But after you mentioned it I visited their site, because vram is one thing but performance is another.
I'm not sure whether to be surprised or not, but it mentions nothing about performance, no FLOPS or anything of the sort. https://support.huawei.com/enterprise/en/doc/EDOC1100285916/181ae99a/specifications
>>
>>106436329
I wouldn't be surprised if it had better loonix support
>>
File: elite-capital.jpg (618 KB, 1894x2646)
>brigade of jeets is here to tell us china can't make software
pretending tiktok, genshin, and every other sorta impressive western-facing app/program doesn't exist only works if you're a 76 iq jeet who eats cow dung cakes. china has their own exclusive ecosystem of programs all natively developed and used by themselves which make western enshittified trash look like a joke. the reason why nvidia/amd have software issues is cause they hire jeets. it's like how boeing has planes dropping out of the sky cause they hire jeets.

this will be mass produced and sold to corporations in china by the end of 2026. by 2027 it will be hitting consumers in china. the only question is if they'll sell it internationally in any real supply, which is difficult to say cause their own demand will be high and they don't want to give any to jew controlled nations
>>
>>106438367
>Ascend 320
>16 TOPS@INT8
>8 TFLOPS@FP16
>8 W of power consumption.
>>
>>106438420
PS: the card has multiple Ascend 320s: x2 (?) for this one and x4 for another card.
It's a low power accelerator.
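Scaling the per-chip numbers quoted above gives the whole-card envelope. A quick sketch; the chip counts per card (2 and 4) are the post's guess, not confirmed specs:

```python
# Per-chip Ascend 320 figures quoted in the post above
TOPS_INT8, TFLOPS_FP16, WATTS = 16, 8, 8

# Assumed chips-per-card for the two Atlas variants mentioned
for chips in (2, 4):
    print(f"{chips} chips: {chips * TOPS_INT8} TOPS INT8, "
          f"{chips * TFLOPS_FP16} TFLOPS FP16, ~{chips * WATTS} W")
```

Even the 4-chip card would land around 32 TFLOPS FP16 at ~32 W, which fits the "low power accelerator" reading.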
>>
>>106436159
>with the same specs
But are they really? Does this thing actually perform the same? Are its drivers as compatible as NVIDIA's? Show some benchmarks.
>>
>>106436416
>Its over for Nvidia
Now they'll have to stop going overboard with the pricing. Finally.
>>
>>106436440
>but westoids will seethe anyway
*brainrotted incels
>>
>>106438489
I, for one, believe the beatings will continue
>>
If software support for it arrives then it will start to become a game changer
>>
>>106436159
as long as it doesn't have any software support it's useless. Same problem as with AMD: their software stack fucking sucks, their docs are terrible, and whatever tools they have are buggy or only work with loads of finetuning. Every last piece of chinese hardware has this issue, idk when they will fix it. I would have thought that a commie government would force them to be more open, but instead you get AI-generated bullshit.
>>
>>106436159
>>106436364
It's only 204 GB/s
Might as well use system RAM
Too slow for AI
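Memory bandwidth does put a hard ceiling on token generation: every decoded token has to stream the full weight set through the chip once. A rough sketch using the 204 GB/s figure from this post; the model size and quantization are assumptions for illustration:

```python
def tokens_per_sec_ceiling(bandwidth_gb_s, params_billions, bytes_per_param):
    """Upper bound on decode speed: each token reads every weight once."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A 70B model quantized to ~4 bits (0.5 bytes/param) on 204 GB/s memory
print(round(tokens_per_sec_ceiling(204, 70, 0.5), 1))  # -> 5.8 t/s, best case
```

So even in the ideal case it tops out in single-digit tokens per second on a big model; real throughput lands below that.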
>>
>>106436419
stealing code doesn't take that much
>>
File: 1754536757579019.jpg (99 KB, 800x824)
>>106436159
>trusting Chinese components
>>
>>106436382
https://e.huawei.com/cn/products/computing/ascend/atlas-300i-duo
https://e.huawei.com/cn/products/computing/ascend/atlas-300v-pro
>>
it's trash
https://www.artificialintelligence-news.com/news/deepseek-reverts-nvidia-r2-model-huawei-ai-chip-fails/
>>
>>106436159
>>106441170
are these GPUs? why are they marketed for AI?
>>
>>106442442
>are these GPUs?
yes
>why are they marketed for AI?
I don't know
>>
File: maxresdefault.jpg (88 KB, 1280x720)
>>106436432
>I heard
You... heard? Are you sure you didn't make all that up?
>>
>>106436159
What can it be used for? Anything beyond a chatbot?
>>
>>106436419
They can vibe-dogfood it.
>>
>>106436416
>AI-optimized card is better at AI than gaming-optimized card
More news at 11
>>
>>106442442
no one cares about gpus outside of gamers (2% of nvidia revenue)
>>
>>106436416
>amd
lmao. when the ai frenzy is over, amd stock will halve again.
>>
>>106442449
>GPU
No, it's an NPU: it can't do FP32 and it isn't compatible with graphics APIs. It's for large-model inference, faster than a CPU but only as fast as a 5050 for AI; the advantage is the RAM size.
>>
>>106442797
Ah, thanks for the clarification
>>
File: 1746983862366915.gif (10 KB, 468x60)
>>106441128
Surely China will stay behind forever, as they are obviously racially incapable. Oh wait, TSMC is Chinese.

The same was said about Korea and Japan, though.
>>
>>106436159
This is a threat to our democracy!! we need more tariffs, orange man
>>
>>106442972
Has anything really become more expensive lately because of this tariff program?
I don't live in the US, but it sounds like shitlib whining
>>
>>106442948
Thankfully they slid into totalitarian dictatorship, which historically hasn't performed that well at capital allocation.
>>
File: a100.png (386 KB, 1425x677)
>>106436412
ah yes, the a100 is also a toy
>>
>>106436412
based retard
>>
>>106436159
>$3000 card
it's half of an H100, let me guess, it has 1/4 the performance
>>
>>106444570
Post where they're selling H100s for $6000
>>
>>106436416
Shit metric, because the 70B model can't fit inside the 4090's VRAM, so it's partially running on probably the slowest DDR4 they could get their hands on.
2.2x 2t/s is just 4.4t/s. Both are useless, but the 4090 will run a 32B model at 50t/s and the AMD cpu shit will barely get 5.
AMD will win once they put 12 memory controllers in consumer desktop cpus. They'll never do that, though.
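Spelling out that arithmetic (all throughput numbers here are the hypothetical figures from this post, not measured benchmarks):

```python
# Hypothetical figures from the post above, not measured benchmarks
offloaded_4090_tps = 2.0                  # 70B spilling into slow DDR4
amd_395_tps = 2.2 * offloaded_4090_tps    # the quoted "2.2x" speedup
in_vram_4090_tps = 50.0                   # 32B that fits entirely in VRAM

print(amd_395_tps)                        # 4.4 t/s: faster, still unusable
print(in_vram_4090_tps / amd_395_tps)     # ~11x gap once the model fits
```

The point being that "2.2x" over a baseline that was bottlenecked on system RAM says nothing about a workload that fits in VRAM.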
>>
>>106436419
you know they've been competitive with the west since deepseek r1, right? still lagging behind on hardware by like 3 or 4 years, but that's a far cry from the 10+ years behind they were 8 years ago
>>106436432
nvidia straight up cancelled a lot of production since chinese companies stopped almost all nvidia hw imports
>>
AMD will win.

MLID released a leak that AMD will release a consumer AI card that uses LPDDR5X. Thanks to the density of those modules, that would let them ship consumer cards with 512GB of VRAM.
>>
>>106447699
CAMM or gtfo
>>
>>106447127
>you know they became competitive with the west since deepseek r1, right?
Yes, using Nvidia GPUs and possibly data mined from American models.
>still lagging behind on hardware for like 3 or 4 years but that's a far cry from the 10+ years they were at 8 years ago
A lot was done, it's true, but look at how long it took Nvidia to build its moat. To think that China, for all the money poured into these projects, will magically get something competitive in a few years is ludicrous. I'm not even bearish on them, the Chinese are proving to be some tough, smart bastards, but they simply don't have the culture of building software, especially software that has to serve an international customer base, which these GPUs will need if they want to truly become competitive (I say this because customer feedback is crucial). I'm sure they'll get there, though. 37B is no fucking joke.
>>
>>106436159
Does it support Cuda or ROCm? If not it's trash
>>
>>106447699
>Which then due to the density of the modules will enable them to release consumer cards with 512gb of VRAM.
Yeah, for the small price of $10000+. lmao, I don't buy any of this bullshit. AMD won't even sell their current AI workstation cards direct to consumers. You have to go through OEMs or the used market to get your hands on them, they're so anti-consumer.
>>
>>106442678
The thing is, it's not optimized. It just has more RAM, so it can fit the model in memory. 70B LLMs suck ass, too. There's no real way to make small LLMs perform better, because they just contain less information. It's like trying to get a retard to do math
>>
>>106436159
Open source drivers? Only ML bullshit?
>>
>>106448390
>70B LLMs suck ass, too.
idk man, i've had some good success with qwen3-coder 30B A3B. Especially for auto completion you don't need insane models.
>>
>>106448390
AI is more than LLM
>>
>>106448515
in particular than generalist LLMs*
>>
>>106436159
Are those any good for img or video gen?
>>
>>106447699
lmao. amd has already behaved like a fucking monopolist for years, they don't give a fuck about cards 3 generations back. all amd is good for is fucking universities in europe that don't do anything important, because it's cheaper than serious nvidia hardware
>>
>>106447699
2 more weeks
>>
nvidia will never be replaced. they know what they have which is why they're priced so high.
>>
>>106451178
Yeah, just like Intel. Except Nvidia can't even build their own stuff.
>>
File: 1692667454968387.jpg (14 KB, 480x480)
14 KB
14 KB JPG
I kneel.
In 10 years Chinks are going to be a major player in these markets, as their systems are going to be a fraction of the price and they have proper chip production going on.
I bet they'll get plenty of state subsidies to keep their own prices low, simply to fuck with Nvidia's pricing.
>>
>>106436159
people like you don't seem to understand that a lot of nvidia's valuation comes from stuff like cuda (and much more).
Nvidia's APIs are so powerful, france wanted to levy fees over it.
Nvidia invested decades into this infrastructure and the frameworks.
Nvidia would only scramble if it decided to let "ai"-powered jeets take over the sources...
>>
>>106443954
china's capital allocation is massive investment into a field, almost akin to feeding a river of piranhas: hundreds of small companies fighting viciously for the top. japan's is the softbank gook giving away whatever's left of japan's real wealth to america.
I wish I could feel bad for the japs, but they're genetically gooks whom I despise so instead I'm only wishing for their demise. don't give me the jomon cope btw, those are the natives wiped out by gook colonization and then ruled by an elite class from china, which is why the grammar is korean and the vocab is chinese (and american, kek)
>>
>>106448390
>70B llms suck ass
you sound like you tried them like 2 years ago
>>106451285
kek, one of the biggest reasons intel is almost going bankrupt is exactly that they use their own fabs
>>
>>106436416
Can it run Crysis?
>>
>>106453488
It can do better than that: it can generate new Crysis.
>>
>>106436159
Explain why I should NOT buy it over Nvidia or AMD.
>>
>>106454021
Very unpatriotic.



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.