/g/ - Technology

>halo strix
>dgx spark
>mac mini
Why are high unified memory devices only available in cuck cube format?
Why isn't any maker releasing them in standard ATX format with tons of expansion options?
It's a heroic feat to even get a dedicated GPU working when it could have been a simple PCIe slot.
>>
>>108505679
they aren't meant to be used directly, you stick them on your network, then hide them in your cuck closet.
>>
File: 1746652460698594.png (290 KB, 454x453)
>>108505679
Let's boycott that name. Let's call it daffodil
>>
>>108505679
the iGPU is using the lanes you would expect to feed a dGPU on a desktop board. It's not Epyc, so it doesn't have extra lanes to go around; if you don't want an iGPU box, don't buy one of these
>>
>>108505679
Why would you care about unified memory if you're just gonna use a dGPU?
>>
>>108505679
So that it doesn't compete with the enterprise sector. Retard.
>>
>>108505801
unified RAM for LLMs, fast VRAM for Stable Diffusion
and also to improve QoL as a normal computer
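The split above ("unified RAM for LLMs") comes down to memory bandwidth: dense LLM token generation streams the whole weight set per token, so capacity and bandwidth of the unified pool dominate. A rough sketch, with all figures (model size, bus widths, bandwidths) as illustrative assumptions rather than measured specs:

```python
# Sketch: LLM token generation is mostly memory-bandwidth bound, so a big
# unified-memory pool matters more than raw compute for large models.
# All figures below are illustrative assumptions, not measured specs.

def tokens_per_second(model_size_gb: float, mem_bandwidth_gbs: float) -> float:
    """Upper-bound estimate: each generated token streams the entire
    (dense) model's weights through memory once."""
    return mem_bandwidth_gbs / model_size_gb

# a ~70B-parameter model at 4-bit quantization is roughly 40 GB (assumption)
model_gb = 40.0

unified_bw = 256.0  # GB/s, e.g. a 256-bit LPDDR5X pool (assumption)
dgpu_bw = 360.0     # GB/s, e.g. a midrange 12 GB dGPU (assumption)

print(f"unified memory: ~{tokens_per_second(model_gb, unified_bw):.1f} tok/s")
# the dGPU moves bytes faster, but a 40 GB model simply doesn't fit in
# 12 GB of VRAM - the unified box wins by being able to run it at all
```

The point isn't the exact tokens/s figure but the shape of the trade: the dGPU's higher bandwidth is useless once the model exceeds VRAM, while the unified box degrades gracefully with model size.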
>>
>>108505679
the chip does not have enough free PCIe lanes; the issue isn't the maker but the chip the box is built on
>>
>>108505819
get a thunderbolt/usb4 dock, or get one of the models that has OCuLink, and plug a GPU in over that. 4x PCIe 3.0/4.0 is plenty because the link only matters for loading the model from storage into VRAM; the 3060 I use for diffusion sits on 4x PCIe 4.0 through the chipset and it works perfectly fine
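The claim that an x4 link is "plenty" for diffusion can be sanity-checked with back-of-envelope math: the link is only saturated while copying the checkpoint into VRAM. The bandwidth and model-size figures here are rough practical assumptions, not measurements:

```python
# Back-of-envelope check: how long does PCIe x4 take to load a checkpoint?
# Bandwidth figures are rough practical assumptions, not measurements.

def load_time_s(model_gb: float, link_gb_per_s: float) -> float:
    """Seconds to stream a checkpoint over a host-to-GPU link."""
    return model_gb / link_gb_per_s

model_gb = 6.5   # e.g. an SDXL-class checkpoint (assumption)
pcie3_x4 = 3.5   # GB/s, practical PCIe 3.0 x4 throughput (assumption)
pcie4_x4 = 7.0   # GB/s, practical PCIe 4.0 x4 throughput (assumption)

for name, bw in [("PCIe 3.0 x4", pcie3_x4), ("PCIe 4.0 x4", pcie4_x4)]:
    print(f"{name}: ~{load_time_s(model_gb, bw):.1f} s to load the model")
# a second or two, paid once per model swap - after that, generation runs
# out of VRAM and barely touches the link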
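The claim that an x4 link is "plenty" for diffusion can be sanity-checked with back-of-envelope math: the link is only saturated while copying the checkpoint into VRAM. The bandwidth and model-size figures here are rough practical assumptions, not measurements:

```python
# Back-of-envelope check: how long does PCIe x4 take to load a checkpoint?
# Bandwidth figures are rough practical assumptions, not measurements.

def load_time_s(model_gb: float, link_gb_per_s: float) -> float:
    """Seconds to stream a checkpoint over a host-to-GPU link."""
    return model_gb / link_gb_per_s

model_gb = 6.5   # e.g. an SDXL-class checkpoint (assumption)
pcie3_x4 = 3.5   # GB/s, practical PCIe 3.0 x4 throughput (assumption)
pcie4_x4 = 7.0   # GB/s, practical PCIe 4.0 x4 throughput (assumption)

for name, bw in [("PCIe 3.0 x4", pcie3_x4), ("PCIe 4.0 x4", pcie4_x4)]:
    print(f"{name}: ~{load_time_s(model_gb, bw):.1f} s to load the model")
# a second or two, paid once per model swap - after that, generation runs
# out of VRAM and barely touches the link
```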
>>
>>108505868
I know
the dGPU issue is one thing, but it also lacks extra SATA, etc.; like most "mini pc"s, its expansion options are quite limited
>>
>>108505892
so just get USB SATA enclosures, or build a NAS if you need more space than you can fit on an NVMe or two. If anything, get USB NVMe enclosures: most of these boxes have 10Gb USB-A ports, and you'll get more throughput through them than through SATA. Some thunderbolt docks have USB passthroughs you can make use of too; just be mindful of the bandwidth you have and what you're plugging into which port. If the dGPU is just for diffusion it won't be using much bandwidth after loading the model, so you'd have that link free for storage or networking connections
you will spend more on these docks and such than just building a normal desktop though. Just having two separate systems may be the better option if you have the space: the mini pc for LLMs, and a standard ATX build for dGPUs to do diffusion and run smaller LLMs faster
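The "10Gb USB beats SATA" point above checks out on paper once encoding overhead is accounted for. A minimal sketch; the line rates are the nominal spec figures and the efficiency factors are the standard encoding overheads, but real enclosures will land somewhat below these numbers:

```python
# Sanity check of "10Gb USB gives more throughput than SATA".
# Line rates are nominal spec figures; real enclosures land a bit lower.

def effective_gb_per_s(line_rate_gbps: float, efficiency: float) -> float:
    """Convert a line rate in Gbit/s to usable GB/s after encoding overhead."""
    return line_rate_gbps / 8 * efficiency

usb_10g = effective_gb_per_s(10.0, 0.97)  # USB 3.2 Gen 2, 128b/132b encoding
sata_3  = effective_gb_per_s(6.0, 0.80)   # SATA III, 8b/10b encoding

print(f"USB 10Gb NVMe enclosure: ~{usb_10g:.2f} GB/s")
print(f"SATA III drive:          ~{sata_3:.2f} GB/s")
# ~1.2 GB/s vs ~0.6 GB/s: an NVMe-over-USB enclosure on a 10Gb port
# roughly doubles what any SATA link can deliver
```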


