>New APU as powerful as A6000 while using 98% less energy
What's the catch?
>>107020059
There isn't one, nvidia is getting routed.
>>107020059
The catch is that you cannot buy it. I can't even find what instruction set it uses.
Ok, but where's the small print? Americans are masters of lying for profit, scammers.
>>107020175
Small print is probably model size. Or it's one big wafer like that other corp's, and the chip costs over a million.
>>107020175
>REEE AMERICANS REEE
Stop being an NPC.
>>107020059
a new APU, you say?
>>107020059
Because it's not an APU. No idea why they're using that term. It's a hyperconverged compute/memory design (that's what our brains are). It's what should have been worked toward from the start.
>>107020323
So it's a new kind of CPU then?
>>107020323
>>107020059
That they are fraudulent.
>>107020059
>tech websites are still so fucking shit in current year that they'll just publish any press release
grim
https://ir.gsitechnology.com/news-releases/news-release-details/compute-memory-apu-achieves-gpu-class-ai-performance-fraction
Is this going to be another one of those "intel is bankrupt and finished!" memes, just with nvidia swapped in for intel? Two more weeks, I guess.
>>107020217
>Or it is one big wafer like that other corp and the chip costs over a million.
I'd guess so, much like what nvidia are producing for their AI shit, sold to meme merchants like OpenAI.
>>107020742
>tech websites are still so fucking shit in current year that they'll just publish any press release
To be fair, companies put out meme press releases too:
https://www.youtube.com/watch?v=KmqPreqzXGA
That's when Microsoft claimed they'd already made a practical quantum computer based on some vague research paper.
>>107020807
Why, Ted Cruz said it's real.
https://www.youtube.com/watch?v=e6OLZ5TRXHo
>>107020059
>noname company that never produced anything
another grift
>>107020175
2 million cores x 1 bit at 600 MHz
That shit will need so much fine-tuning to be properly usable that it won't even be financially feasible.
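For what the claim is even worth, here's the back-of-envelope math on that spec. The figures (2 million 1-bit cores, 600 MHz, ~64 bit-serial steps per INT8 multiply-accumulate) are taken from the post above and rough convention, not from any datasheet:

```python
# Back-of-envelope throughput for the claimed spec (figures assumed from
# the post above, not from any datasheet).
cores = 2_000_000          # 1-bit processing elements
clock_hz = 600_000_000     # 600 MHz

peak_bit_ops = cores * clock_hz      # raw 1-bit ops per second
# A bit-serial 8-bit multiply-accumulate needs on the order of
# 8 * 8 = 64 bit-level steps, so divide to estimate an INT8 MAC rate.
int8_macs = peak_bit_ops / 64

print(f"peak 1-bit ops/s: {peak_bit_ops:.2e}")   # 1.20e+15
print(f"rough INT8 MAC/s: {int8_macs:.2e}")      # 1.88e+13 (~19 TOPS)
```

So even granting the headline numbers, the useful arithmetic rate depends entirely on how many bit-serial steps each real operation costs, which is exactly where the fine-tuning pain comes in.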
>>107020059
The catch is it doesn't exist.
>>107020059
It only does integer additions and has 64 bytes of RAM.
>>107020059
It uses 98% less energy because it lacks CUDA support.
>>107020059
A6000s were released 5 years ago already. And they weren't designed for ML computation to begin with.
>>107020059
Who the fuck knows how to program for a compute-in-memory architecture?
>>107020059
>>107023917
Yes, research papers have shown since the '80s that compute in memory could dramatically improve power efficiency, but it's such a massive paradigm shift and adds so many other limitations, like not being able to add more memory without physically adding transistors to the chip.
https://semiengineering.com/the-uncertain-future-of-in-memory-compute/
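To see why the programming model is such a paradigm shift, here's a toy simulation of bit-serial compute-in-memory. Everything here (lane count, word width, the ripple-carry scheme) is illustrative, not how this or any real chip works: memory is a grid of bits, each column is a lane, and every operation is a whole-row logic op applied in place:

```python
import numpy as np

# Toy bit-serial compute-in-memory model (purely illustrative).
WORDS = 8    # columns = parallel lanes ("cores")
BITS = 16    # bit-serial word width

rng = np.random.default_rng(0)
a = rng.integers(0, 1 << 15, WORDS)
b = rng.integers(0, 1 << 15, WORDS)

# Operands stored bit-sliced: row i holds bit i of every word.
A = np.array([(a >> i) & 1 for i in range(BITS)], dtype=np.uint8)
B = np.array([(b >> i) & 1 for i in range(BITS)], dtype=np.uint8)

# Ripple-carry addition, one bit position (row) per step. Each step is a
# single row-wide logic op: all WORDS lanes compute at once with no data
# movement, which is where the claimed energy savings would come from.
S = np.zeros_like(A)
carry = np.zeros(WORDS, dtype=np.uint8)
for i in range(BITS):
    S[i] = A[i] ^ B[i] ^ carry
    carry = (A[i] & B[i]) | (carry & (A[i] ^ B[i]))

# Reassemble the bit-sliced sums into integers and check them.
result = (S.astype(np.int64) * (1 << np.arange(BITS))[:, None]).sum(axis=0)
assert np.array_equal(result, a + b)
```

Note the trade-off: one addition costs BITS sequential row ops no matter what, and throughput only comes from adding more columns, i.e. physically adding more memory to the chip.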
>>107020059
Nice confirmation bias for me that getting rid of central processors/stream units speeds up execution, which /g/ told me was schizo.
>>107023975
>not being able to add more memory without physically adding transistors to the chip.
Is that really a problem in the era of advanced packaging? Besides, virtually no one ever upgrades their memory after buying a system: not consumers, not businesses, not hyperscalers.
>>107020059
No idea about this particular chip, but these "low energy" chips are usually for inference only, as opposed to GPUs, which can also be used for training.
There are tons of these out there. AFAIK the question is whether they're as flexible as GPUs.
>>107020217
Last time this happened it was a "simulation only" chip, i.e. something they put together a small sample of, simulated for a week at 1/10000th of scale, then multiplied the efficiency by 10,000 to get their number. So basically NOTHING.
>>107020165
>instruction set
You don't really need one.
Neural networks are specific enough that you could likely get away with a fixed-function pipeline, and that's probably what they did here to get those efficiency savings.
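The fixed-function idea in software terms, as a made-up sketch (not this chip's actual design): instead of fetching and decoding instructions, the hardware hardwires one operation sequence with the parameters baked in, so there's no fetch/decode energy cost at all:

```python
import numpy as np

# Illustrative sketch of a fixed-function pipeline: this "hardware" can
# only ever do matmul -> bias -> ReLU, with its parameters baked in.

def fixed_function_layer(weights, bias):
    """Bake one layer's parameters into a closure, the software analogue
    of wiring the weights right next to the compute units in silicon."""
    def pipeline(x):
        return np.maximum(weights @ x + bias, 0.0)  # the only op it knows
    return pipeline

rng = np.random.default_rng(1)
layer = fixed_function_layer(rng.normal(size=(4, 8)), np.zeros(4))
y = layer(rng.normal(size=8))
assert y.shape == (4,)
assert (y >= 0).all()   # ReLU output is never negative
```

The flip side is the flexibility question from earlier in the thread: a pipeline like this can't run anything that isn't matmul-bias-ReLU shaped.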
>>107020059
>A6000
bro, that's 5-year-old samsung 8nm trash