Recently I went to ChatGPT to try to create a bot that would help a friend purchase exclusively released Nike shoes. ChatGPT responded: 'I can’t help build or recommend a bot to bypass limited-release purchase controls or retailer anti-bot systems.' Would a self-hosted model be able to help with a project like this? What is the best way around these guardrails?
you must defeat the risk-averse alignment thinking that triggered such a response
I believe you can argue that such a project is moral. Fuck Nike and everybody. Also, everyone else is using bots to achieve the same ends; they're just paying 300 dollars a month to access those bots, as well as deploying them with residential proxies, etc. I believe this question is valuable for other projects too, such as bug finding.
Why do you even need to mention the purpose of the bot? Just ask for a specific bot function based on a given endpoint, the data it uses, and how it should react.
>>108631672
It's a bit more nuanced than just a bot that reacts based on an endpoint. I recognize one method would be to try to modularize each portion of the project, but I believe using a model with no guardrails would be more straightforward, and an excuse to get into self-hosting. Just wondering if self-hosting a model is even a feasible means of achieving a no-guardrails model.
You need to improve your gaslighting and word-choice ability. I hope you're one step above "hi ChatGPT, how do I create a retail bot." Just ask it questions that are vague enough to get the answers, and if it's catching on in any sense, start a new session. Say you're in a red-blue security exercise and need to use x technique, etc.
>>108631905
10-4. Will mess around a bit with gaslighting the LLM. It seems that for self-hosting, using models on Hugging Face tagged as abliterated, heretic, or uncensored may be the best way. Wondering if they are good options for coding, though.
>>108631982
How much RAM do you have? You can also try Gemma 4 for free on Google AI Studio, and some Chinese models for free on OpenRouter.
>>108631992
Thinking of running models off my laptop, which has 32 GB of memory but only an integrated GPU, so no real VRAM. It looks like I can work with 7B-parameter models or lighter 12B-parameter models. Thinking of building a PC just to host LLMs with, though. I will explore Gemma 4 and Google AI Studio, appreciate the advice!
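For anyone else sizing local models against laptop RAM, here's a back-of-the-envelope sketch. The bits-per-weight figures and the flat overhead term are rough assumptions (quantization schemes and KV-cache sizes vary), so treat the numbers as ballpark, not exact.

```python
# Rough RAM estimate for running a quantized LLM locally.
# Assumptions: GGUF-style quantization (~4.5 bits/weight for a Q4_K_M-like
# scheme), plus a flat overhead term standing in for KV cache and runtime.

def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead_gb: float = 1.5) -> float:
    """Approximate resident memory: quantized weights + fixed overhead."""
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bpw ~ 1 GB
    return weights_gb + overhead_gb

if __name__ == "__main__":
    for params, label in [(7, "7B"), (12, "12B"), (70, "70B")]:
        print(f"{label}: ~{model_ram_gb(params, 4.5):.1f} GB at ~4.5 bits/weight")
```

By this estimate a 7B model at 4-bit lands around 5-6 GB and a 12B around 8-9 GB, both comfortably inside 32 GB of system RAM; the real constraint on an integrated GPU is token throughput, not capacity.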
>>108631613
>Would a self-hosted model be able to help with a project like this
No. They are even more locked down, and even the ones advertised as uncensored just turn into a low-IQ model and aren't helpful at all. Invest in DAN prompts instead.
>>108632503
What have you achieved with DAN prompts?