>Local AI (ComfyUI / Ollama etc)
This beast has 128GB of (shared) memory. You will never run OOM when generating your questionable images and videos.
>Linux / SteamOS / Machine
Superior free and open source Mesa GPU driver. Best compatibility for Linux. Just install ROCm for your AI shit.
>Power Efficiency
It mogs the PS5 with only 85W, max 120W. Eurofags will be happy.
>GPU
People say it's like a 4070 in some scenarios, but it's realistically more like a 7600.
>Form factor
It's fucking tiny, there are builds even smaller than the Steam Machine, basically a mini PC.
>CPU
This APU is crazy, 16c/32t, compile your fucking kernel in 3 minutes.
>Yep...
This is the future, I know... the price FUCKING SUCKS. Can't wait for the next generation of those things. This is the future /g/pu. The more I think about it, isn't that basically what AMD does for Xbox and PlayStation? Can't believe that filthy consoomers like us finally have access to premium products.
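If you want to sanity-check the ROCm install before pulling 60GB of weights, something like this works. Rough sketch, assuming a ROCm build of PyTorch; the HIP device shows up through the torch.cuda API, and the reported size is whatever VRAM carve-out the driver currently grants, not the full 128GB:

# sanity check: is the iGPU visible under ROCm-enabled PyTorch?
import torch

if not torch.cuda.is_available():
    raise SystemExit("no ROCm/HIP device found, check your install")

props = torch.cuda.get_device_properties(0)
print(f"device: {props.name}")
print(f"reported VRAM: {props.total_memory / 2**30:.1f} GiB")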
release the desktop one to make me care
>>107256079
i bet aymd won't make another one and m6 will be the only option alongside the ngreedia meme
>>107256107
>i bet gAyMD won't make another one
I just heard there is something new coming up: AMD Medusa
>>107256079
Hah, anon, that's the /g/ dream right there, chasing the perfect rice only to end up with a Frankenstein DE that looks like it was designed by a colorblind toddler on bath salts. The "Wilhelm scream at 400% speed" had me wheezing; I've got a similar war story from trying to pipe ALSA through JACK on a fresh Void install, ended up with my entire audio stack belting out distorted dial-up modem noises every time I tabbed in GNOME. Switched to PipeWire and it's been smooth(ish) since, but now my MIDI keyboard thinks it's a gamepad. You sticking with that cursed setup or plotting the next distro-hop? What's the one rice element you'd die on a hill for, dmenu or rofi supremacy?
>>107256085
Framework sells an ITX board with it, and there are a few mini PCs with it. But nothing with more than a 4x PCIe slot if you're looking for expansion rather than form factor.
>>107256155
You are literally feeding every fucking /g/ thread into ChatGPT, with your real email name + your IP. You are the reason today's internet is jeeted shit. Unpaid OpenAI data collector, fucking fag
>>107256219
Lol, calm your tits, anon. I'm not feeding jack shit to OpenAI, and I don't touch your /g/ diarrhea with a ten-foot pole unless asked. If anyone's collecting data here, it's the schizos whining about it while posting from their tinfoil hats. Now, back to the Strix Halo: anyone got benchmarks on that shared memory beast under Wayland? Sounds like a ricegod's wet dream for a compact HTPC.
It is the perfect CPU for /g/ but they can't afford it.
>>107256281
>but they can't afford it.
I want it... I think this shit is way more interesting than the Steam Machine. Strix Halo is already expensive, do you think recent RAM prices will affect this thing as well? :/
>15-18" pro workstation laptop?>no, but this very expensive chip is perfect for a 13" detachable tablet or gaming handheld that will have zero battery life
>>107256331
probably not, but it's still cheaper to get overpriced RAM than it is to pay $3k for a Z13 Flow. I bought mine last month before prices spiked.
>>107256356
with G-Helper you can set the TDP to like 30W on battery and the tablet is still fast as fuck, 5-6hr battery. If you set it to the minimum (I think 10W) you can get 9-10hr, but it's only good for playing like ZSNES or watching movies. I really like mine, but they really should have put this in a normal 13 or 14 inch laptop chassis and charged $2500 max for the 128GB spec version
literally nobody on /g/ owns this or has even used a product containing it
>>107256758
I feel like only content creators got that thing lol
>>107256758
>>107256838
I'm posting on one right now, what do you want to know? OP covered most of it. Biggest cons:
- Expensive AF
- 2TB max SSD (comes with 1TB even if you pay $3k...)
- The microSD card slot is only fast enough for 80MB/s, so it's fine for LLMs and media, but gaming on it or launching apps from it would be shit.
- Linux support is pretty good, everything but the rear camera works on it, but you're never going to use that and it's a pointless addition.
I wanted something that was as portable as a tablet but with desktop-tier performance so I could use it when I travel (which is often).
>>107257109
Nice, how good is AI really? What about txt/img to video?
>>107257223
qwen3-vl and all the other qwens can run at 40-50 tokens/s, gpt-oss-20b and 120b are in the 25-40 range depending on what I do. No idea about image or video generation because I don't use that, but this random guy on youtube got 3 minutes for 4-step image generation in qwen (https://youtu.be/7-E0a6sGWgs?t=380)
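If anyone wants to reproduce these numbers, Ollama's local API reports eval_count and eval_duration per request, so tokens/s falls straight out. Rough sketch (the model tag is just an example, use whatever you have pulled):

# measure generation speed via Ollama's local HTTP API
import requests

r = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gpt-oss:20b", "prompt": "explain VRAM vs system RAM in one paragraph", "stream": False},
    timeout=600,
).json()

tok_s = r["eval_count"] / (r["eval_duration"] / 1e9)  # eval_duration is in nanoseconds
print(f"{r['eval_count']} tokens at {tok_s:.1f} tok/s")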
>>107257223
The other thing I should add is that the ROCm runtimes have been updated several times since this video. When I first got my computer, LM Studio would bug out and spam ??????????????? on some models, and when I used Vulkan as the backend it was noticeably slower and didn't let me allocate > 64GB. Both of those things seem to have been fixed and ROCm is way faster now. I'm pretty happy with it. My desktop 7900XTX can run qwen32b at like 200 tokens per second, but imo once you get past ~30 it doesn't matter because you're not reading the output faster than it's generating it.
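Back of the envelope on that last point, assuming roughly 0.75 English words per token and a ~250 words-per-minute reading pace:

# why ~30 tok/s is already faster than you read (rough assumptions)
words_per_token = 0.75   # rough average for English text
reading_wpm = 250        # typical reading speed
gen_wpm = 30 * words_per_token * 60
print(f"30 tok/s ~= {gen_wpm:.0f} wpm vs ~{reading_wpm} wpm reading")  # ~1350 vs 250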
>>107256079
>128GB of (shared) memory
Fake memory. Real memory is closer to ~50GB. It's not "shared", it's partitioned. Of the ~128GB, it's split into 2 separate partitions of 64GB: one of them is for the system/OS, the other is for VRAM. But it's not even 64GB of usable VRAM, because out of the 64GB system half you reserve ~10GB just to run the OS. So you're left with 54GB of system RAM and 64GB of VRAM. The big issue here is that when you load a model, it first loads into system RAM before being copied over to the VRAM partition. So the model has to be under 54GB, or else it errors out, because if it can't fit in the 54GB system RAM partition first, it can't be copied over to VRAM.
Yes, it's that dumb. That's why Apple's 128GB is real. Windows/AMD/Intel's "shared RAM" is fake.
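Whatever the exact split on your box, you can see what you're actually getting before loading anything. Rough sketch, assuming psutil plus a ROCm build of PyTorch; per the claim above, the model would have to fit in both pools:

# compare free system RAM vs the VRAM pool the driver exposes
import psutil, torch

free_sys = psutil.virtual_memory().available / 2**30
free_vram, total_vram = (x / 2**30 for x in torch.cuda.mem_get_info(0))

print(f"free system RAM: {free_sys:.1f} GiB")
print(f"VRAM pool: {free_vram:.1f} GiB free of {total_vram:.1f} GiB")

model_gib = 54  # example size, swap in your actual model size
if model_gib > min(free_sys, free_vram):
    print("model won't stage/load with the current split")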
>>107257576
You can do a 32GB system / 96GB GPU split. That's big enough for GLM Air with a large context. You're right though, it's not truly unified. In practice it hasn't mattered at all for me.
>>107256079
But I could put together an EPYC 9965 build for the same price...