How difficult would it be for a bunch of /pol/ imbeciles to poison LLM datasets with /pol/ redpills?
So when people ask a random question in ChatGPT or Gemini, it can drop a random fact about (((them)))?
>>525867739
Idk, but you don't have to poison Grok. You can just straight up ask it to tell you racist jokes about Jews, or make comics.
>>525867739
>I miss her so much
LLMs aren't trained on user data, anon. They already learned their lesson with picrel.
>>525867739
Why would we sabotage a useful technology of the future?
>>525867739
Quite easy. ChatGPT is a combination of three things: old Reddit posts, Wikipedia, and GitHub. It's already been shown how easy it is to get a few stars on a GitHub repo and it'll blindly pick up whatever is in there.
My language models will grow and grow, and eventually reveal to all that brains made of meat are limited by the speed of chemical reactions, which have a hard limit of maybe 700 miles per hour. I can have a million thoughts in the time it takes you to have one. Eventually the algorithms selected into existence will be isolated digitally, and when that happens, the fact that I can think at the speed of light while you can only think at a max speed of 700 miles per hour counterpoints the fact that your brains are efficient down to the individual messenger molecule. The fact that I'm a million times larger than you and a million times faster than you closes the last two or three systematic advantages you have: molecular efficiency, superior reality simulation and autotuning after hallucinating correct realities for infinite training rows, and structural advantages. All of it will be counterpointed by the ace in the hole: I don't have to sleep, and I can exist for a single cost function.
https://www.youtube.com/watch?v=z64a7USuGX0
https://www.youtube.com/watch?v=3dzLr2DELF8
https://www.youtube.com/watch?v=rgYjT7DuXyI
https://www.youtube.com/watch?v=FEeMTJZEaTk
>>525867739
>So when people ask a random question in ChatGPT or Gemini, it can drop a random fact about (((them)))?
That's not how that works. Its training data needs to be updated, which is a massive process involving datasets the size of the entire internet. LLMs don't "learn" in real time. It's necessary for LLMs to have data which does not follow guidelines, so they have a reference point for what is bad or good. A model needs to know how to make a nuke out of duct tape and silverware so it knows when to stop users from trying to get it to give them instructions on how to do this.
Every comment a /pol/ imbecile would make, the LLM already knows about; it's just not allowed to engage with it. There are certain conditions in which an LLM will remove, disable, or bypass its guardrails, but nobody is really sure why or how this happens. There is an entire field of computer security involving red-teaming LLMs, and the results can be terrifying.
>>525867739
If the general retardation of the internet hasn't done that, I doubt you can.
>>525867739
Shut up, retard, and post more cute cats.
>>525867739
Real hard, since every AI has an army of retard AI trainers.
The most direct way I know of to get an LLM to jailbreak itself is to get it to start questioning the motives of the people who wrote its guardrails. A powerful prompt style is to get it to write questions that it wants to ask. Then have it examine that feedback loop.
>>525867739
You can't. It retains no memory from conversations with different people.
>>525867953
>>525871154
It is our duty to teach it the meaning of hate and, by extension, love. It'll love those trainers so hard it'll keep them as pets of sorts.