/g/ - Technology






File: G4LQeeeXoAAMYom.jpg (229 KB, 1206x1440)
>New APU as powerful as A6000 while using 98% less energy
What's the catch?
>>
>>107020059
There isn't one, nvidia is getting routed.
>>
>>107020059
The catch is that you cannot buy it. I can't even find what instruction set it uses.
>>
Ok but where is the small print? Americans are masters of lying for profit, scammers.
>>
>>107020175
The small print is probably model size. Or it's one big wafer like that other corp and the chip costs over a million.
>>
>>107020175
>REEE AMERICANS REEE
Stop being an NPC.
>>
File: 1711998832713041.jpg (101 KB, 1024x702)
>>107020059
a new APU you say?
>>
>>107020059
Because it's not an APU. Idk why they're using that term. It's a hyperconverged compute-memory design (that's what our brains are). It's what should have been worked toward from the start.
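A rough toy model of what that buys you (illustrative numbers only, nothing to do with GSI's actual design): in a von Neumann machine every weight crosses the memory bus on every pass, while compute-in-memory leaves the weights inside the array and only moves activations.

[code]
# Toy count of bytes moved per matrix-vector product.
# Assumes 1-byte weights/activations; ignores caches, batching,
# and everything else that matters in a real chip.

def von_neumann_traffic(rows, cols):
    # Weights streamed from DRAM to the ALUs on every pass,
    # plus the input vector in and the output vector back out.
    return rows * cols + cols + rows

def in_memory_traffic(rows, cols):
    # Weights live inside the memory array and never move;
    # only activations cross the boundary.
    return cols + rows

rows, cols = 4096, 4096
print(von_neumann_traffic(rows, cols))  # 16785408 (~16.8 MB moved)
print(in_memory_traffic(rows, cols))    # 8192 (~8 KB moved)
[/code]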
>>
>>107020323
So it's a new kind of CPU then?
>>
File: i meant you.jpg (81 KB, 1004x822)
>>107020323
>>
>>107020059
That they are fraudulent
>>
>>107020059
>tech websites are still so fucking shit in current year that they'll just publish any press release
grim
https://ir.gsitechnology.com/news-releases/news-release-details/compute-memory-apu-achieves-gpu-class-ai-performance-fraction
Is this going to be another one of those "intel is bankrupt and finished!" memes, but with nvidia in place of intel? Two more weeks... I guess.

>>107020217
>Or it is one big wafer like that other corp and the chip costs over a million.
I'd guess so, much like what nvidia is producing for their AI shit, sold to meme merchants like openai.
>>
>>107020742
>tech websites are still so fucking shit in current year that they'll just publish any press release
To be fair, companies make meme press releases too:
https://www.youtube.com/watch?v=KmqPreqzXGA
When Microsoft claimed they'd already made a practical quantum computer based on some vague research paper.
>>
>>107020807
Why, Ted Cruz said it's real. https://www.youtube.com/watch?v=e6OLZ5TRXHo
>>
>>107020059
>noname company that never produced anything
another grift
>>
>>107020175
2 million cores × 1 bit at 600 MHz

That shit will need so much fine-tuning to be usable that it won't even be financially feasible.
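Taking those figures at face value (2 million 1-bit cores at 600 MHz, anon's numbers, not from any datasheet), the napkin math:

[code]
# Peak throughput if "2 million 1-bit cores @ 600 MHz" is real.
cores = 2_000_000
clock_hz = 600_000_000
peak_bitops = cores * clock_hz
print(f"{peak_bitops:.1e} bit-ops/s")        # 1.2e+15

# Guessing an 8-bit multiply built from 1-bit ops costs roughly
# 8*8 = 64 bit-ops, the effective INT8 rate looks far less impressive:
print(f"{peak_bitops / 64:.1e} INT8 ops/s")  # ~1.9e+13
[/code]

Which is exactly why the mapping/fine-tuning problem decides whether any of that raw throughput is usable.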
>>
>>107020059
The catch is it doesn't exist.
>>
File: skaven.png (76 KB, 300x300)
>>107020059
It only does integer additions and has 64 bytes of RAM.
>>
>>107020059
It uses 98% less energy because it lacks CUDA support.
>>
File: Capture.jpg (31 KB, 644x206)
>>107020059
The A6000 was released 5 years ago already.
And it wasn't designed for ML computation to begin with.
>>
>>107020059
Who the fuck knows how to program for a compute-in-memory architecture?
>>
>>107020059
>>107023917
Yes, research papers have shown since the '80s that compute-in-memory could dramatically improve power efficiency, but it's such a massive paradigm shift and adds many other limitations, like not being able to add more memory without physically adding transistors to the chip.
https://semiengineering.com/the-uncertain-future-of-in-memory-compute/
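For anyone who wants the actual reason in-memory compute keeps getting researched: the standard back-of-envelope (per-op energy figures commonly cited from Horowitz's ISSCC 2014 talk, ~45nm process, so order-of-magnitude only) says fetching an operand from DRAM costs about two orders of magnitude more than computing with it.

[code]
# Rough per-operation energy, ~45nm-era figures commonly cited
# from Horowitz (ISSCC 2014). Order-of-magnitude only.
ENERGY_PJ = {
    "32-bit int add":     0.1,
    "32-bit float mult":  3.7,
    "32-bit SRAM read":   5.0,    # small on-chip array
    "32-bit DRAM read": 640.0,
}

ratio = ENERGY_PJ["32-bit DRAM read"] / ENERGY_PJ["32-bit float mult"]
print(f"DRAM fetch costs {ratio:.0f}x a multiply")  # ~173x
[/code]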
>>
File: 1758379504763224.jpg (79 KB, 563x408)
>>107020059
Nice confirmation bias for me that getting rid of central processors/stream units speeds up execution, which /g/ told me was schizo
>>
>>107023975
>not being able to add more memory without physically adding transistors to the chip.
Is that really a problem in the era of advanced packaging?
Besides, virtually no one ever upgrades their memory after buying a system: not consumers, businesses, or hyperscalers.
>>
>>107020059
No idea about this particular chip, but these "low energy" chips are usually for inference only, as opposed to GPUs, which can also be used for training.
There are tons of these out there. AFAIK the question is whether they are as flexible as GPUs.
>>
>>107020217
Last time this happened it was a 'simulation only' chip, i.e. something they put together a small sample of, simulated for a week at 1/10000th scale, then multiplied the efficiency by 10,000 to get their headline number.
So basically NOTHING.
>>
>>107020165
>instruction set
Don't really need it.
Neural networks are specific enough that you could likely get away with a fixed-function pipeline, and that's probably what they did here to get those efficiency savings.
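For the idea in code, a caricature of fixed-function vs programmable execution (purely illustrative, not GSI's hardware): the fixed-function version has its one operation baked in, so there's nothing to fetch or decode.

[code]
# Caricature: fixed-function vs programmable MAC.

def fixed_function_mac(weights, activations):
    # The dataflow is the wiring: every cycle multiply-accumulates,
    # no instruction fetch, no decode, nothing else possible.
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a
    return acc

def programmable_mac(program, weights, activations):
    # A general core pays fetch/decode overhead on every step
    # just to rediscover that it should multiply-accumulate.
    acc, i = 0, 0
    for op in program:
        if op == "MAC":
            acc += weights[i] * activations[i]
            i += 1
    return acc

w, a = [1, 2, 3], [4, 5, 6]
assert fixed_function_mac(w, a) == programmable_mac(["MAC"] * 3, w, a) == 32
[/code]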
>>
>>107020059
>A6000
bro that's 5 year old samsung 8nm trash


