Do you have executable AIs? I have a llama.exe that doesn't need the internet.
You put it on a Windows 10 machine that has never been, and never will be, connected to the internet.
And then you run AI stuff with it. The AI must be sent a text prompt before it does anything.
I was wondering if there are a lot more models like this. I just don't feel like connecting to the internet.
>>108189886
ollama if you want just an API
LM Studio if you want a usable client, with the option of an API as well
Then download the models from the internet and run them offline.
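For reference, a minimal sketch of what "just an API" looks like once ollama is installed and running: it serves HTTP on localhost (port 11434 by default), so any client can query it with no internet involved. The model tag here is only an example; substitute your own.

```
# sketch: one-shot completion against a locally running ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```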
This is how every local model works
>>108190062
>>108189893
How do I download a model into it? I downloaded the ollama setup like this:
>using Windows XP SP3, Pentium 2, 1024 MB RAM, Supermium browser from 2026
The Windows 10 computer is physically incapable of accessing the internet, but it has better hardware for AI.
>>108190105
Go here: https://ollama.com/
Look for the model name and variant you want; the page will tell you what to pass to `ollama pull`. For example, if you see `ollama run qwen3.5:cloud`, do `ollama pull qwen3.5:cloud`.
ollama pull
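Putting the two commands together, a minimal sketch of the pull-then-run flow on the machine that still has internet access (the model tag is just an example; substitute whatever the model page shows):

```
# pull once while online; the weights land in ollama's local model store
ollama pull llama3.2:3b
# then run entirely offline against the local copy
ollama run llama3.2:3b
```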
>>108190141
What does that give you? 1 token per minute?
>>108190141
No idea until I get to try it.
>>108189886
Download koboldcpp.
Go to Hugging Face.
Download a Mistral Nemo GGUF; get one that's about 1 GB smaller than your VRAM.
I'm running llama.cpp with Qwen Next and Qwen Coder Next for some stuff.
Yes, including cooming. It's pretty fun.
I have yet to fuck around with image models, though.
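A sketch of launching either backend against a downloaded GGUF. The filename is a made-up example and the flags shown are the common ones; check `--help` on your build.

```
# koboldcpp: single executable, serves a web UI on localhost:5001 by default
koboldcpp.exe --model Mistral-Nemo-Instruct-Q4_K_M.gguf --gpulayers 33

# llama.cpp: one-shot prompt from the terminal (-ngl offloads layers to the GPU)
llama-cli -m Mistral-Nemo-Instruct-Q4_K_M.gguf -ngl 33 -p "Hello"
```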