I was now able to run a text model AI using completely local computer resources, and it fetches nothing from the internet.
But only a 4GB model runs. What now? The more capable models seem to be 8GB, 16GB or even bigger.
An 8GB one crashed on 6GB RAM.
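Rough math, assuming these are ~4-bit quants (so roughly 0.6 GB of file per billion parameters, plus some headroom for context):
4GB file ~ 7B params, plus ~1GB for context/runtime ~ 5GB, which just squeezes into my 6GB
8GB file ~ 14B params, needs 9GB+ on its own, so of course it crashed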
The AI file itself was downloaded on Windows 7 (hence this doesn't look like Windows 10), because Windows 7 is connected to the internet while Windows 10 ain't.
>16gb
HAHAHAHA, I wish it was only 16gb. I'm on 16gb over here and the only agent that somewhat does shit is Qwen Code 30B, basically a junior dev that only runs tasks for 10 minutes and then comes back asking "now what, boss"
*saxophone solo*
Make ur pagefile really big, worked for me when I wanted to do image gen with only 8GB. It's not very fast though.
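On Windows you can do it from an admin command prompt with something like this (a sketch, assuming the pagefile lives on C:; sizes are in MB and you'll need a reboot):
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16384,MaximumSize=16384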
return to saturn
>>108194920
>llm on w10 vm on w7 host
How much RAM does the host system have, and does it have a dGPU with some VRAM you could use?
>>108195152
>>108195145
The 2GB stable diffusion model worked, but it's very slow on CPU, 10 minutes per image.
>>108195292
I have 8GB system RAM :)
must leave 2GB for Windows 7
still no GPU besides an integrated one.
>>108194920
>fetches nothing from internet
Joke's on you: unless you run it entirely in the command console, your browser is reading everything the text model is pushing out.
>>108195363
You don't seem to understand, Windows 10 is not connected to the internet, there is no device it can access the internet with.
It ain't magically tapping into 4G/5G networks without a SIM card and subscription, it has no WLAN module, and the ethernet port doesn't have a cable physically plugged in.
>>108194920
You can always stream the weights from disk... It will be slow as fuck, but it'll work.
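llama.cpp mmaps the model file by default, so the OS pages weights in from disk as needed and the file doesn't have to fit in RAM. A minimal sketch, assuming a recent build where the binary is called llama-cli (the model filename is just a placeholder):
./llama-cli -m ./model-q4_k_m.gguf -p "hello" -c 512 -t 4
A small context (-c) keeps the KV cache from eating what little RAM is left.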
Local models are totally worthless. I would say the most normie-tier setup you could build is a Mac Studio with 512GB of RAM, but even that only gives you access to mediocre models that won't perform as well as the free tiers of online models.
>>108195145
try out gpt-oss for coding
>>108195350
>I have 8GB system ram
damn dude I had 16gb on my 4690k rig, wtf are you doing?
>>108194920
You have so little RAM that it's hard to do much with it. You can use swap, but that makes things even slower.
The models you can run will be outdone by all the free online chatbots in both speed and reasoning. Sorry, 6GB of RAM is just not enough for LLMs.
If you had a dedicated GPU you could do stable diffusion stuff. Diffusion models on CPUs are extremely slow (10-20x slower than a comparable GPU).
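If you'd rather find out before loading than via a crash, a quick sketch (psutil is a third-party package, the 1.2x headroom factor is just a guess, and the filename is a placeholder):

import os
import psutil  # pip install psutil

model_path = "model-q4_k_m.gguf"  # placeholder filename
need = os.path.getsize(model_path) * 1.2  # weights plus rough headroom for context/runtime
have = psutil.virtual_memory().available  # bytes of RAM currently free

if need > have:
    print(f"won't fit: need ~{need / 2**30:.1f} GiB, only {have / 2**30:.1f} GiB free, expect heavy swapping")
else:
    print("should fit in RAM")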
>>108194920
You can run pretty ok models on 8gb of vram (photo gen with Illustrious), glad ur experimenting with constrained systems. How long did those gens take btw?
>>108197101
I ran stable diffusion on my 4gb 3050 laptop and it could make 768x768 images faster than my 6900xt lmao
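For anyone else on a small card, the usual low-VRAM knobs in diffusers look something like this (a sketch; the model ID and prompt are just examples, and enable_model_cpu_offload needs the accelerate package installed):

import torch
from diffusers import StableDiffusionPipeline

# fp16 halves the memory footprint vs fp32
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_attention_slicing()   # compute attention in chunks, lower peak VRAM
pipe.enable_model_cpu_offload()   # park idle submodules in system RAM

image = pipe("test prompt", height=768, width=768).images[0]
image.save("out.png")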