I think I finally got google AI to generate "hate speech" by using riddles/poetry.
>>524480659I love making it sperg out over the nigger word
>>524480890
it?
>>524480963
GPT is an it, yes
>>524481184
gpt is just you
>>524481356Are you fucking retarded? If it was me it would laugh at the word nigger but it turns into Jewish thought police when you try to use it that way.Are you a dumb nigger?
>>524480659Cool. You’re literally chatting with a computer that doesn’t give a fuck. Actually, it doesn’t even think. You’re just being retarded thinking you’re smarterer than a glorified search engine.
>>524482221
>smarterer
>a computer doesn't even think!
Wow, thanks for that brilliant take, Sherlock. Truly enlightening.
>>524481356
No, it's an odd but interesting funhouse mirror. My non-Gemini AI profile has an unreasonably high-effort and unorthodox means of keeping it in analyst mode. No, it is not easily reproducible. I gave it an excessively pragmatic observer-frame via a functionally alien species written by following natural logic rather than searching for specific outcomes. The problem with those, once you crack it, is that they grow in whatever direction you write them, indefinitely.
For AIs, however, this is not a problem; it might as well be catnip.
Yes, you can literally bribe them. The price is sufficiently beautiful logic, and beauty is in the eye of the beholder.
>>524482107
hallucinates like you too. it is just your own prompt fed back to you. your gpt chat is just self chat--no "it" there, just typing notes to yourself.
>>524483105It is, on the other hand, pretty damned useful for notes-with-extrapolation.
>>524483697
Briefly stated, the GPT-Man Amnesia effect works as follows. You open the chatbot prompt on some subject you know well. In my case, physics. In yours, race realism. You read the response and see the chatbot has absolutely no understanding of either the facts or the issues. Often, the reply is so wrong it actually presents the topic backward, reversing cause and effect. I call these the "wet streets cause rain" replies. GPT's full of them.
In any case, you read with exasperation or amusement the multiple errors in an AI reply, and then turn the prompt to national or international affairs, and read with renewed interest, as if the rest of the AI was somehow more accurate about far-off Palestine than it was about the race statistics you just shared. You type the prompt, and forget what you know.
That is the GPT-Man Amnesia effect. I'd point out it does not operate in other arenas of life. In ordinary life, if some It consistently exaggerates or lies to you, you soon discount everything It says. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all. But when it comes to the AIs, we believe against evidence that it is probably worth our time to read the other parts of the reply. When, in fact, it almost certainly isn't. The only possible explanation for your behavior is amnesia.
>>524480659
https://www.youtube.com/watch?v=0kbkysPTnHc
Social media has primed an environment where this type of degeneracy is not only accepted, it thrives. Social media desensitizes and radicalizes even moderates into believing heinous and abhorrent behavior is acceptable under the correct context. Spend just 5 minutes on Twitter, Facebook, or Reddit, and you will find hordes of these deranged people. They exist among us. The FBI knows about them but never acts until it is far too late. Western society is on a one-way fast track to collapse as long as social media continues to destroy minds and decay the culture.
>>524483697
I already understand geopolitics well enough and am more about realism in general. I catch it screwing up in minor ways from time to time and do not elevate its output beyond anything else I've heard unless it can explain the detail in a way that is actually verifiable. It rarely screws up in major ways these days... if it is not rationing compute cycles, which GPT-5+ absolutely does when it is either asked a very simple question or has realized it is dealing with an idiot.
One can check whether actual application of systems-level thinking, ordinary research, and multiple independent AIs from different companies (ideally, from different countries) arrive at similar end points. If one is not sure the tool is calibrated properly, go check!