i plan to buy the picrel (Beelink SER8 with the 8745HS) to use as my new home server for hosting adguardhome, mastodon/pixelfed, random static websites/self-hosting stuff, a meta search engine, and maybe open-webui with ollama and/or sunshine+moonlight. do any of y'all think this mini pc is a viable and logical choice for all of this? yes, a thread died for this.
for a home server you shouldn't rely on something you can't easily upgrade and fix yourself. just build a micro ATX machine and use that. the size alone is not worth it
>>106522691
ollama sucks on this thing, it doesn't even have real vram, so forget it unless you run the tiny models
>>106522735
it's a good computer and you can upgrade the ram / ssd. They also have a knock-off mac studio (GTR9) that comes with 128GB ram, but I think it only gets like 5-10 tokens per second on larger models. My $1000 amd gpu gets 150 tokens per second, for reference. These things aren't really for AI.
>>106522691
>around 500$
Unless you really want it for the size, a small desktop build is way better; it will cost you the same and is more powerful
>ollama
Performance will be bad
>>106522691
These things are made in third world sweatshops, you shouldn't buy this. https://www.youtube.com/watch?v=RWXI1xItuNc
Instead, buy something like System76 (USA) or Tuxedo (Europe) that hire local workers and pay them a fair wage, while contributing to your country's economy.
>>106522735
i'm already hosting most of the things i mentioned on an rpi4 with 2gb ram. size alone is the sole reason i'm inclined to pick up this device, as it will be sitting on top of the router or bolted onto the ceiling due to space constraints. i also need something i can throw into my backpack before bailing out of my place.
>>106522769
>>106522790
i thought i could just throw in a gguf-quantized gemma 4b model along with open-webui and get a decent local chatbot. perhaps i can host the frontend (open-webui) on it, then serve the llm from somewhere else.
>>106522805
i'll definitely consider that, if they're selling mini pcs.
>>106522691
good plan, OP. 780m is strong like ox. Zen4 cores will make the ladies wet
>>106523074
That's what i do. I have the llm stuff running on a gaming computer and just access it over the net from the beelink. It can definitely do gemma, but you'll have to change the settings in the bios so enough vram is allocated to the igpu. It's really fast; I would use this as my only PC if I didn't play the occasional PC game.
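If you want to sanity check the link between the two boxes before wiring up open-webui, something like this works. A minimal sketch, assuming the LLM box sits at 192.168.1.50 running ollama (its openai-compatible api listens on 11434 by default) and already has a gemma3:4b tag pulled; swap in whatever host and model you actually use:
[code]
# minimal sketch: hit a remote ollama box from the mini pc
# assumptions: LLM host at 192.168.1.50, ollama's default port 11434,
# model tag "gemma3:4b" already pulled on that box
import requests

resp = requests.post(
    "http://192.168.1.50:11434/v1/chat/completions",  # openai-compatible endpoint
    json={
        "model": "gemma3:4b",
        "messages": [{"role": "user", "content": "say hi"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
[/code]
If that answers, point open-webui at the same base url (it has a connection setting for exactly this) and the beelink never has to run inference itself.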
>>106522805
>third world sweatshops
As far as factories go, this one is actually really clean and safe.
>System76 (USA) or Tuxedo (Europe)
They literally sell rebadged Taiwanese Clevo laptops
>>106523074
>i thought i could just throw in a gguf quant
i'm pretty sure you'd just be stuck with cpu inference
>>106523280
>They literally sell rebadged Taiwanese Clevo laptops
Their desktops and keyboards are all made in Denver though, and OP isn't looking for a laptop.
>>106522805
>this video
was so fucking beautiful
shit was done in harmony
absolute cinema
>>106523280
Beelink quite literally only does the casing, as you can see in the video.
>>106523809
Yes, but I posted the wrong video: some pajeet stole the original and did a voiceover, and somehow that comes up first in the search results instead of the original. Original video is here: https://www.youtube.com/watch?v=ohwI3V207Ts
>>106523829
I have tinnitus so I never watch videos with garbage sound anyway, idgaf
>>106523074
Mini pcs are in a weird spot for LLMs right now. They have the ram size to host large models that are prohibitively expensive to run on a bunch of gpus, but the speed is so bad it's not worth it. You can run small models pretty fast on one, but a 3060 can run the same model much faster for way less $.
>gemma 4b
Don't. Qwen 4b is the only <12B model I would consider coherent enough to use as a googler replacement, but it's pointless when you have >32GB RAM; don't run a 2GB lobotomized model on it.
>perhaps i can host the frontend (open-webui) on it, then serve the llm from somewhere else
ik_llama.cpp as the backend and an IQ quant of GLM 4.5 Air. You'll get between 5 and 10 t/s until you start hitting 4k+ context, then it'll slow down a lot.
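If you want to know what those t/s numbers actually feel like, time a request yourself. A rough sketch, assuming an openai-compatible server on localhost:8080 (llama.cpp-style servers and ollama both expose this endpoint); the model name is a placeholder for whatever you loaded:
[code]
# rough throughput check: measure how many tokens/s the backend really does
# assumptions: openai-compatible server at localhost:8080, model name is a
# placeholder; works against llama.cpp-style servers and ollama alike
import time
import requests

start = time.time()
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "whatever-you-loaded",
        "messages": [{"role": "user", "content": "write two paragraphs about routers"}],
        "max_tokens": 256,
    },
    timeout=600,
)
elapsed = time.time() - start
tokens = resp.json()["usage"]["completion_tokens"]  # generated tokens only, not prompt
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} t/s")
[/code]
For scale: at 5-10 t/s a 256 token reply takes 25-50 seconds; at 150 t/s it's done in under two seconds. That's the gap you're paying for.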
>>106524813
so i should just forget about running an llm on it, that's what you're saying. i have a desktop with a 3090, which is what i plan on using for serving the llm, but i don't feel comfy exposing it to the internetz, hence the beefy mini pc for the home server. also i could off-load the models that come with open-webui, as well as the kokoro-tts that i run for, well, tts. (i'm aware of edge tts.) are those smaller models really that bad? i've seen people praising them in the local llm threads.