I'm a hobbyist programmer looking to start using AI tools to increase my productivity, but I'm not sure what I could use without giving all my data away. I tried running some local models (Deepseek R1, Qwen 3B coder, etc.), but honestly they are not very good. If I want to use better tools like any of Google's or OpenAI's models, then I'm giving all my info away for them to train their shit on and store forever, which is something I don't want.
Is there anything that doesn't suck but still has relatively good privacy? Anything you guys recommend using / looking into? Please give me some recommendations.
Sorry if I offended anyone by asking this.
Your body has biological wetware that can be developed into a likeness of intelligence. It's totally autonomous, very difficult for corporations to hijack unless you explicitly allow them to, and it does not leak your information without your explicit actions either; it has a really unparalleled level of privacy. Also, unlike currently existing convex optimisation solvers, it can handle complex multi-staged goal-oriented tasks, thanks to an internalised world model, at a fraction of the time, resources expended, and errors made. You should really try that.
>>1546508
Qwen3-coder 30B works very well for me. If it matters, I am proficient in the areas I'm using it for.
>>1546512
Next you will tell me that gradient descent is less powerful than neurons going "biribiri".
>>1546508
>AI
>privacy
it's over for you bro
>>1546512
Thank you. This was, however, not an answer to my question.
>>1546535
Thank you. I think I will take a look at that model again.
May I ask what your workflow with it is? Do you use something like Alpaca or llama.cpp to prompt it and paste the output back into your IDE, did you hook it up with a special AI IDE and have it edit the code directly, or something else entirely?
>>1546538
>>1546543
Thank you
>>1546538
In sheer brute force, no. But an innate understanding of the problem domain allows it to cut unnecessary brute forcing immensely by decomposing the problem. It doesn't matter how well you can optimise or scale solving a problem with a billion parameters if the same problem can be better expressed as a handful of problems with hundreds of parameters each; you'll never beat that performance. It's literally a difference of many orders of magnitude in operations, which even all the hardware in the world cannot feasibly account for.
>>1546508
I use llama.cpp with the model deepseek-coder-33b-instruct.Q4_0.gguf (18.8 GB). It runs on my desktop with no GPU and 32 GB of RAM at a whopping 1 token/second.
It writes code that compiles, but gets in over its head with anything more than one medium-sized function. That's OK with me, because I use it to show me how to do function calls on ancient APIs (X11 LOL).
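That 18.8 GB file size roughly follows from the quantization format. Q4_0 packs weights into blocks of 32 (32 four-bit weights plus one fp16 scale factor), which works out to about 4.5 bits per weight on average. A quick back-of-the-envelope check in Python (the 33e9 parameter count is assumed from the model name; the real file is a bit larger because some tensors are kept at higher precision):

```python
# Rough size estimate for a Q4_0-quantized GGUF model.
# A Q4_0 block stores 32 weights x 4 bits + one fp16 scale = 144 bits
# per 32 weights, i.e. 4.5 bits per weight on average.
def q4_0_size_gb(n_params: float) -> float:
    bits_per_weight = (32 * 4 + 16) / 32  # = 4.5 bits
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

print(f"{q4_0_size_gb(33e9):.1f} GB")  # ~18.6 GB for a 33B model
```

So a model at that quantization just barely fits in 32 GB of RAM alongside the OS, which also explains the 1 token/second on CPU: every token has to stream the whole ~18 GB of weights through memory.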
>>1546508
Create a burner e-mail with fake details, then use those AI models.
>t. created seven burner e-mails for AI usage