/g/ - Technology

File: renge-shrug.gif (133 KB, 220x226)
If RAM is so expensive and AI needs so much of it, why don't they just optimize the AI?
>>
File: look at the bottom.png (166 KB, 639x715)
>>107626529
Fun fact: Most LLMs are made in python.
>erm, so what chu-
If you would please consult my graph.
>>
Scam altman does not want that because he wants a monopoly. It's not working: his competitors are busy optimizing while he tries to brute force it, buying up all of the supply to cripple them.
>>
>>107626588
Source?
>>
>>107626529
The only people out there with the know-how to optimize ML to run efficiently on graphics cards are Nvidia engineers
Nvidia wants to charge fat stacks of money for pitiful amounts of vram
>>
>>107626742
sorry I was petting my cat xd
https://thenewstack.io/which-programming-languages-use-the-least-electricity/
There's also this one, which seems to corroborate it. Python is consistently at the bottom of these tests in terms of energy and RAM efficiency.
https://www.sciencedirect.com/science/article/pii/S0167642321000022
>>
>>107626529
UOOOOOOOOOOOOH NAKADASHI IN RENGE!!!
>>
>>107626529
Let's say you can make an AI that runs in 128GB memory and scores 96% on some benchmark. Now let's say that a bigger model scores 98% but requires 512GB.
The sensible and responsible thing to do would be to stick to the much smaller model that's nearly as good. Unfortunately while you're busy doing that, your competitor has just released a model that requires 2TB and scores 99%. Nobody cares about anything other than bigger number = better, so your sensible balanced model immediately becomes irrelevant. Better start working on a 4TB model that scores 99.8%.
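The diminishing returns described above can be sketched with the post's own (made-up) numbers: each roughly 4x jump in memory buys a smaller and smaller benchmark gain, so score-per-gigabyte collapses as models grow.

```python
# Hypothetical models from the post above: (RAM in GB, benchmark score %).
# These numbers are illustrative, not real benchmark results.
models = {
    "128GB": (128, 96.0),
    "512GB": (512, 98.0),
    "2TB": (2048, 99.0),
}

# Benchmark points earned per gigabyte of RAM.
efficiency = {name: score / ram for name, (ram, score) in models.items()}

for name, eff in efficiency.items():
    print(f"{name}: {eff:.3f} points per GB")
```

Running this shows the small model delivering about 0.75 points per GB against roughly 0.05 for the 2TB one, which is exactly why "sensible and balanced" loses to "bigger number = better" in marketing but not in cost per query.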
>>
>>107626529
It's probably cheaper to buy more RAM than to optimize the models.
>>
>>107626529
No need to when they have all the money in the world thanks to dumbass investors.
>>
>>107626588
Nocoder spotted
>>
>>107626529
>optimize the AI
plebs can’t be allowed that


