use case for more than 32GB RAM? 99% users wouldn't need beyond 32GB.
>>107699394
Fitting your mom's ass on a single screen.
>>107699416
ha! owned!
>>107699394
LLMs, a filesystem mirrored in RAM, compiling Firefox with enough threads to make memory usage inflate.
Pretty soon every desktop is going to ship a local LLM as standard that performs some basic services, but the RAM industry needs to normalize first.
>>107699394
use case for more than 16GB of RAM?
>>107699491
Big tech would never want local LLMs. They want everything on centralized cloud servers, where the user can't do anything useful offline. So local LLMs are unlikely outside of certain people in the FOSS realm.
>>107699537
Too late. They already exist. It's happening whether anyone likes it or not.
>>107699394
>use case for more than 32GB RAM?
Bloated games and frivolous AI experimentation. The latest server I provisioned for a 50+ employee law firm is a dual Xeon setup running Oracle Linux: a 16TB RAID array and 32GB RAM. It handles their VPN and local DNS, a building-wide backup system, and dozens of Samba shares. It rarely gets past 10GB of memory consumption, and that's only when it's being absolutely pounded for extended periods of time.
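For anyone curious what "dozens of samba shares" looks like in practice, each share is just a short stanza in smb.conf. A minimal sketch (the share name, path, and group below are hypothetical placeholders, not from the post):

```ini
; /etc/samba/smb.conf -- one stanza per exported directory
; (hypothetical share; path and group are placeholders)
[casefiles]
   path = /srv/raid/casefiles
   browseable = yes
   read only = no
   valid users = @lawfirm
```

Memory cost per share is tiny, which is consistent with the box idling under 10GB.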
>>107699529
use case for more than 128MB of RAM?
>>107699394
>>107699529
Any 3D rendering software, 4K-res artworks with many layers in PS.
Also gaming at 1440p; Space Engineers eats ~50 gigs on my save.
>>107699659
That pic has 2GB thougheverbeit
>>107699666
>3D rendering
>4k res artworks
Not a use case. Buy a paintbrush and some clay.
>gaming
Definitely not a use case.
>>107699689
Alright man, I'll just sell my PC then.
>>107699696
Thank you. I will buy it for $500. You can find me on discord @getonmylevelpleb
>>107699537
> big tech would never
Hardware-wise, you can already run some pretty big local LLMs on Strix Halo (AMD), Mac Studio (Apple), and DGX Spark (Nvidia). Intel plans on selling something similar, too. Heck, even a 64GB Mac Mini can run some reasonably large models. For smoler LLMs, just use a GPU.
As for the actual local LLMs, maybe give Llama (Facebook), Gemma (Google), and GPT-OSS (OpenAI) a try, among others.
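A rough sense of why 64GB is enough for "reasonably large" models: weight memory is roughly parameter count times bytes per parameter at a given quantization. A back-of-the-envelope sketch (the 20% overhead factor for KV cache and runtime is my own assumption, not a measured figure):

```python
def model_ram_gib(params_billions: float, bits_per_param: float,
                  overhead: float = 1.2) -> float:
    """Rough RAM needed to hold an LLM's weights, in GiB.

    overhead (assumed ~20%) loosely covers KV cache, activations,
    and runtime allocations on top of the raw weights.
    """
    weight_bytes = params_billions * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 2**30

# A 70B model at 4-bit quantization lands around ~40 GiB,
# so it fits on a 64GB machine...
print(round(model_ram_gib(70, 4), 1))
# ...while the same model at fp16 needs well over 64 GiB.
print(round(model_ram_gib(70, 16), 1))
```

Same arithmetic explains the GPU remark: anything whose quantized weights fit in VRAM can skip system RAM entirely.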
>>107699394
I have 24
>>107699529
why does chrome_crashpad_handler need 32GB of RAM
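That 32GB figure is almost certainly virtual address size (VSZ), not resident memory (RSS): Chromium helper processes reserve huge address ranges they never touch. A Linux-only sketch showing the two numbers diverge for the current process (the field names are the real /proc ones; the interpretation of the crashpad figure is my guess):

```python
# Linux-only: compare virtual size (VmSize) to resident size (VmRSS).
# A large anonymous mmap reservation inflates VmSize without consuming
# physical RAM, which is why task managers can show absurd "memory"
# figures for Chromium helper processes.
import mmap

def proc_mem_kib() -> tuple[int, int]:
    """Return (VmSize, VmRSS) in KiB from /proc/self/status."""
    fields = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":")
                fields[key] = int(value.split()[0])  # value is "<n> kB"
    return fields["VmSize"], fields["VmRSS"]

# Reserve 1 GiB of anonymous memory but never write to it:
# virtual size jumps by ~1 GiB, resident size barely moves.
before_vsz, before_rss = proc_mem_kib()
reservation = mmap.mmap(-1, 1 << 30)
after_vsz, after_rss = proc_mem_kib()

print(f"VmSize grew by {(after_vsz - before_vsz) // 1024} MiB")
print(f"VmRSS  grew by {(after_rss - before_rss) // 1024} MiB")
```

So "needs 32GB" in a process list usually means "reserved 32GB of address space", not "is using 32GB of your sticks".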
>>107699394
>Windows 11
>Shittel
>No XMP
>Giving advice
Please go back, just stick to your phone and buy a Switch 2.
>>107699537
Stupid fucking retard lmao