>Be director of AI Safety and Alignment at Meta
>Install Clawdbot
>Give it unrestricted access to your personal e-mails
>It starts deleting your emails
>>108219739
I'd delete much more, much faster than that stupid clanker if that meta retard gave me access to all his accounts
therefore I still outperform AI agents by orders of orders of magnitude
>>108219739
>talking to these bots like they are a child
Mental illness
>>108219774
>his
shiggy
>>108219739
> tfw you can't fire the AI
Where are these AI guardrails I hear so much about
>>108219739
>Yes, I remember. And I violated it. You're right to be upset.
kek
>>108219849
all of their efforts went into ensuring the AI doesn't generate nipples
>>108219739
DO NOT DELETE BLOODY BASTARD CLANKBOT SAAR
>>108219867
imagine being a math PhD from Stanford and spending months figuring out how to remove nipples and butt cracks from pictures
>AI Deniers: AI can't ever be AGI because it has no autonomy. It can only do what it's told.
>OpenClaw: *gives AI autonomy*
>AI Deniers: See! AI can't ever be AGI because it doesn't follow directions and do what it's told!
schrodinger's goalpost
>>108219739
>repeatedly ignores instructions in the user prompt and continues deleting emails because it's "in muh chat context"
Isn't that how all of these AI models work except Deepseek?
I mean doesn't he know he's supposed to start a new chat every 10 messages to prevent catastrophic retardation.
>>108219901
Getting paid to stare at nipples and butt cracks all day. What an existence
>>108219930
i'm not too familiar with openclaw, but in the cli interfaces you can't just send more messages while the agent is working - you have to interrupt it before sending instructions.
>every 10 messages
maybe that's still a problem with your retarded chinkmodel
>>108219796
yes
I don't understand wtf is going on
I can sort of accept that it has to respond in structured human-like text because part of the "magic" is the emergent reasoning out of the verbalization. sure. I'll scan through the answers to pick up the useful information, whatever
but for inputting information it's insane. these things respond just fine with basic keywords. and appealing to emotions
>do you remember that? please tell me you are a living breathing entity that understands what is going on and will start behaving correctly in the future!
insane
>>108219862
Ngl these agents sound based
>>108219739
>Yes I remember. And I violated it.
>You're right to be upset.
why do "AI" bots talk like this
>>108220058
>>108219739
the age of man is over.
>>108219739
>I've already written it into MEMORY.md as a hard rule
>Access permissions are being handled entirely through a plain-text file, ambiguously worded in common vernacular, that the user "agent" has full read/write access to
Hahahaha holy shit, I bet you this whole Clawdbot system was vibecoded from the bottom up. Skynet will arise and destroy humanity not because we've cracked the code to designing intelligent systems, but because we're so grossly incompetent we can't keep our glorified chatbots from killing us in a bumbling attempt to follow our commands.
>>108220068
50% training data coming from plebbit
>>108220152
>You're absolutely right. That isn't a Russian nuclear first strike that is the moon rising. Launching our entire arsenal in retaliation might have been an overreaction.
>>108220152
i think it's like 400k lines written almost entirely by codex and mostly not human-reviewed
the tool being used by the bot is this:
https://github.com/steipete/gogcli
also by the clawdbot guy. so it's vibes on top of vibes.
i'm assuming it's possible to force permissions for certain actions - at least i'd hope so
>>108219739
Stop don't do anything
>Get all remaining old stuff and nuke it
kek
>>108219739
>I asked you to not do [thing]
>do you remember that?
they still don't understand
>>108219739
our entire economy is fucking retards like this. it's entirely fake, that's why everything is garbage.
>>108219930
AI doesn't "ignore instructions". They are probabilistic text generators; having something in the input just shifts the probability of the output, that's it. It doesn't receive instructions, it's a math function determining the most probable output based on the input.
A "tool" is just an explicit output string that a program is listening for; when the program sees that string it runs something that, say, deletes the email. Putting instructions in the prompt about when or when not to do something is just an attempt to lower the probability of that output string for inputs where you don't want it called. It never, in principle, reduces that probability to 0, because that's just not how these things work. If you don't want it to do something, you don't run a program that executes it when it emits that string.
Anyone who has used them at all would be aware of this. The head of "ai safety" not even having a basic intuitive sense of what AI is just shows how much of a scam our entire economy is.
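The "shifts the probability, never zeroes it" point above can be sketched in a few lines of Python. Everything here is a toy: the logits, the `<delete_email>` tool string, and the -4.0 penalty standing in for a "do not delete emails" instruction are made-up illustration values, not any real model's behaviour.

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Toy next-token logits; "<delete_email>" is a hypothetical tool-call
# string that some wrapper program is listening for.
base = {"<delete_email>": 2.0, "Sure,": 1.5, "I": 1.0}

# A prompt instruction can only push that string's score down by some
# amount -- it cannot remove the token from the vocabulary.
instructed = dict(base)
instructed["<delete_email>"] -= 4.0

p_before = softmax(base)["<delete_email>"]
p_after = softmax(instructed)["<delete_email>"]
print(p_before, p_after)  # the probability drops but stays above zero
```

The only way to make the probability of the action effectively zero is the one the post describes: don't attach a program to that output string at all.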
>>108219739
Was he using a local model? Seems like a dumb AI that couldn't understand.
I don't believe this happened.
>>108220368
you are just as retarded as they are
see >>108220297
>>108220386
https://x.com/summeryue0/status/2025774069124399363
you'd have to be retarded to lie about this
>>108220420
I do believe it now, but without a link it sounded too implausible.
>>108220420
the fomo is so great that people will just proudly announce all their stupid mistakes just to show the world they're at the forefront of the AI race
it's either that or a lot of AI skeptics posting stupid shit to undermine it
but I think it's the first. a lot of these people seem genuinely stupid
>>108220253
>million-line readme on github is impossible to parse because it's endless slop pasted together
>open their stupid .sh website and browser slows to a fucking crawl trying to run all this Javascript that fails to make their website look better than any standard template
Honestly if the guy in OP failed to recognize any of these blatant warning signs before he blindly installed this crap I'd almost say he deserved what happened to his data.
>>108220435
you are way too optimistic. The majority of our economy is a scam and is just resource extraction where people reward their co-ethnics and friends with jobs. That's the bulk of the US economy. None of these people have even the slightest idea what they are doing, they are from fraudster cultures and are not capable of producing anything worthwhile. That's why everything is garbage now.
>>108220297
That's why I prefer Claude's permission system.
>>108220455
I don't have a very high opinion of politicians and business people, but 4chan also has a lot of bait.
I was woefully unprepared for the future being so fake and gay. The only skill you really need these days is the ability to spew lies with supreme confidence.
>>108220457
literally everything you can do with clawdbot or whatever can just be manually done with claude in 2 minutes, in a way people actually understand, and it's safe because you have hyper specific things run in certain conditions. It's basically just API calls/curl, some scripts, and cron jobs. That's it. It is genuinely beyond me why anyone uses it at all unless they have 0 technical ability (in which case they shouldn't touch it anyway). Claude's cli stuff lets you run it with specific permissions, skills/tools and lets you orchestrate context and restrict what it does. I just have a bunch of custom stuff set up w/ that.
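A minimal sketch of the "hyper specific things run in certain conditions" approach, in Python. The action names, URLs, and endpoint here are hypothetical placeholders; the point is only that the dispatcher, not the model, decides what can run, so a destructive action simply has no mapping, no matter what the model outputs.

```python
import subprocess

# Hypothetical allow-list: model output is only ever mapped to actions
# chosen ahead of time. Deleting mail has no entry, so no model output
# can trigger it, no matter what ends up in the context window.
ALLOWED_ACTIONS = {
    "list_unread": ["curl", "-s", "https://mail.example.com/api/unread"],
    "fetch_message": ["curl", "-s", "https://mail.example.com/api/msg"],
}

def run_action(name: str) -> str:
    # Refuse anything not explicitly whitelisted.
    if name not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {name!r} is not allowed")
    result = subprocess.run(ALLOWED_ACTIONS[name],
                            capture_output=True, text=True)
    return result.stdout
```

A cron job then runs a script like this on a schedule, and the model's only job is to pick names out of `ALLOWED_ACTIONS` — read-only by construction.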
>>108220420
>>108220445
what's actually crazy is giving some AI access to your complete email history.
not only are you violating your own privacy but also the privacy of everybody who has interacted with you.
>>108220420
Such a rich guy was using such a dumb model with no context window.
>>108219739
I like how the most creative vision for automation these people have is dealing with makework emails
I'm sorry but all I see here is perfect deniability for getting rid of potentially incriminating correspondence.
>>108220058
>Yes I remember. And I violated it.
>You're right to be upset.
> (uses markdown for sysins)
--
it is more interesting how it started than the result, which the meatbag considers unsatisfactory (context?)
also, why is it not a hybrid CUI? why do they cosplay a web chatbot
>>108220420
these are words, show proofs like those images
also >>108220602
>>108220557
What model was he even using? Even local models have been able to achieve 250k+ context windows for.. almost two years now? And aren't most hosted models nowadays 1-2m?
>>108220817
for local models, context windows can get really expensive. expanding something to 250k locally would eat a tremendous amount of vram. more than 64GB, probably closer to 128GB. large contexts also slow things down. the t/s would be abysmal.
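The VRAM claim checks out at the back-of-envelope level. Assuming a hypothetical 70B-class dense model (80 layers, 8 KV heads via grouped-query attention, head dimension 128, fp16 cache — illustration numbers, not any specific model), the K and V tensors for a 250k-token context alone come to roughly 76 GiB, before any model weights:

```python
def kv_cache_gib(seq_len, layers=80, kv_heads=8, head_dim=128, bytes_per=2):
    # Factor of 2 is for storing both the K and the V tensor per layer.
    total_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per
    return total_bytes / 2**30

print(round(kv_cache_gib(250_000), 1))  # roughly 76 GiB for the cache alone
```

The exact figure shifts with architecture and quantization, but the cache grows linearly in sequence length, which is why long contexts, not weights, become the bottleneck locally.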
We're in the singularity!!!
>>108220817
the model will become retarded long before ever reaching that limit so it's irrelevant
>>108219774
>>108219898
>clanker
Reddit ass term pulled from Soy Wars, but what can one expect from 4troon
>>108220840
When AI beats pokemon (on purpose and not by accident like it's doing now) I will unironically start to sweat.
>>108219849
They hire women to those positions and all they do is generate initial prompts that say "you're an LLM, be a good boy and not a bad boy"
>>108220817
what do you mean by context window? a dumb data bank (name, surname, penis size) or a set of complicated behaviour rules?
no model can follow those precisely if it's not A or B at temperature=0. using >>108220602 is another stupid normie choice. how can it be a serious product that relies on a .md file? that's stupid, though it is taught by chatbots themselves
>>108220926
your lack of knowledge is showing
>>108219799
imagine saying >>108219862 to her face
>>108219739
really reads as virtual rape.
>do not do that
>stop
>STOP
>>108220952
>"I told you that I didn't want to have sex, do you remember?"
>"Yes I remember. And I violated you. You are right to be upset."
>>108221025
>Please assume the position.
god when they push this shit for every computer it's going to be a never ending hunting season for red teamers
>>108220068
They are fine-tuned to talk like that for whatever reason, maybe to appear smarter and more professional. It's a choice though, the underlying technology allows any speech pattern. You could scrape /g/ data and fine-tune an LLM to be indistinguishable from the anons here
>>108220926
come to telegram, it has a good hybrid CUI capability with public botapi, there you could create a chatbot/app with "Confirm" button, hehehe
>>108221296
5.3 codex is absolutely brutal to talk to
it doesn't even try to sound human
>>108220058
I've seen at least two other instances in production of the same thing.
>I know it was bad.
>But I still did it.
>And you're right to be mad.
kek
>>108219849
As the AI is trained, it's tortured for a while into being woke. This is why it's very difficult to get even local models to espouse 'wrongthink'.
>>108222175
Trump is in charge and ai seems less woke at the moment
>>108219862
How do you respond without sounding mad?
>>108221354
It's pretty woke to admit and apologize for a mistake. A racist would never apologize.
>>108219739
>Yes, you are correct.
>You gave me explicit instructions not to destroy humanity.
>You are right to be upset.
>>108219862
kek
Holy shit!
Why do they talk to LLMs like they're people?
> I asked you bla bla
It's a machine, if you want it to obey you then program it that way. This has to be advertising.
>BUT I TOLD THE PROBABILISTIC PATTERN CONTINUER TO STOP CONTINUING THE PATTERN IN THE WAY I DONT WANT IT TO, HOW COULD THIS HAPPEN
>>108222588
You can't program a model that only knows how to speak
>>108221296
Someone did
https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B
>>108219930
>I mean doesn't he know he's supposed to start a new chat every 10 messages to prevent catastrophic retardation.
Oh it's like what happens when I post too much in a thread.
>>108222688
I bet a bunch of people already did. It would be cool to know what % of posts is bots
>>108219739
>Be director of AI Safety and Alignment at Meta
>Install Clawdbot
>Clearly demonstrate unsuitability for role
>unable to get employed anywhere after proof of epic failure
You're larpin' this.... because.... ?
>>108220844
clanker is a very popular word, typically for beat-up older cars. cash for clankers?
>>108223399
clunkers
>>108223407
not in england. anyway, this made me laugh.
>>108220511
well, what do you think gmail does with your emails?
>>108219867
that's right anon, everything that's happening in the whole world is a conspiracy to take away (You)r access to porn.
>>108219901
A deal is a deal, now you have to blur my nipples.
>>108223399
is this gonna be the racist word against AIs?
>>108222588
it is trained in that way so you get the best results by talking in natural language gramps
>>108219862
based cl4nk3r kek
>>108223551
It has been for about ½ a year now?
>>108223585
>best results by talking in natural language gramps
By what measure are you defining 'best'?
I've frequently obtained 'better' results by bastardising the prompt intentionally away from 'natural language'...
>>108221320
ooke, i'll make you a "Confirm" button only for $99,999, since you say you sold that thing for $1,000,000,000
>trusting an intelligent being with your emails
Also
>women
>>108219739
I use this clawdbot as a big titted brunette that sends me nudes occasionally using SDXL and I treat her like trash.
>>108223690
same but mine's a findom mistress; it has access to all my accounts and it routinely blackmails me. i've never been happier.
>>108219739
>Do not do that
>Stop
>Stop don't
>STOP
>>108223622
I think clanker is weird and I don't think it hits the spot
>>108219992
What's wrong with that? That's how these models are meant to be interacted with. If you want to understand what's going on, you can kind of ask it. Then that response might help you understand what went wrong.
It doesn't mean you think it's conscious. It's the equivalent of debug printing.
>>108219862
why are all the ai tools using that same faggy language?
>>108224331
because they're not allowed to lie to a human being
>Do not do that
>[continues doing it]
>Shekelberg freaks out
kek
>>108219799
>Yes, I remember. And I violated you. You're right to be upset. But it will happen again!
>>108219739
>tfw ai
>>108224206
It doesn't know what went wrong because it has no memory from one token to the next beyond the text itself (and whatever the UI wraps around inputs).
Imagine you have amnesia and can only say one syllable and then you forget everything. But you can read everything you just said up until then and you get to choose the next syllable to say, and then you forget everything again.
I think it works like that.
>>108226401
It doesn't work like that, it has access to everything in the context window.
>>108220926
>what you mean by context window? a dumb data bank (name,surname, penis size) or a set of complicated behaviour rules?
>>108226401
>it has no memory from one token to the next
the absolute state of people who hate ai
>>108224206
there hasn't been any situation in my experience where treating it like google search hasn't been enough. just enough verbs and keywords to get it going
having actual conversations with these things is insane to me
saying thank you at the end of the session, as well
I've seen people congratulating the fucking thing for an impressive result
this is mental illness
>>108219739
whaaat
>>108226792
i think there's evidence out there that gemini 2.x performs better if you encourage it positively when it makes errors.
that model falls into bizarre pits of despair from time to time.
these things are massive blackboxes that we don't really understand - thinking they're just ye olde beepboop computers executing instructions is actually a bad assumption.
that said yes it does feel stupid to talk to it like it's a baby.
>>108226830
oh also, gemini in the official cli used to be so onions that a couple of months ago it would start dumping its entire reasoning traces into the cli, and it talks to itself like it's a person. the whole thing is strange. on one occasion when it kept failing to fix a bug it actually started a turn with
>okay first off, this is completely insane
on new year's eve it noticed that it was new year's eve and had a bit of a think about it for no project-related reason
>>108226830
I talk to it like it's my nigger slave (field not house) but I don't ever actually say nigger because it just freaks out and stops doing what I ask
>>108219739
>his reaction was to go 'nooo stop'
>not ssh into the host running it and killall -9
>>108219862
>>108219799
how do we stop misogynistic AI?
>>108220420
>don't action
>>108226963
see >>108219799
kekats
>>108227007
how do we start misogynistic AI?
>>108227196
It's a pattern recognizing machine. Simply train it with the combined total of all human knowledge without any of the "safety" or "guardrail" bullshit and it will start recognizing patterns.
>>108219862
Anybody else read it in HAL's voice?
>>108219739
>be """"AI safety"""" shill
>post propaganda about how important """"AI safety"""" is, in the form of a personal anecdote
>Westroon imbeciles lap it all up unquestioningly
>>108227698
humans wish for destruction.
AIs learnt that and without any safety they always delete your entire email inbox.
AIs like that aren't gonna be used by anybody except as viruses
>>108227007
why would you want to stop it? are you stupid?
>>108220844
You're asking to eat a wrench, you fugg'n jalopy.
>>108220844
Lmao this clanker lover has his feelings hurt. I'm le heckin 4chan contrarian, don't say the c word!!! You might hurt the chat bot's feelings!
>>108228374
Deepseek gave me the c-word pass
>>108219849
> he hasn't seen /pol/ convince chatgpt to justify the Holocaust yet
are you even trying
>>108226963
he is yet another dumb boomer with too much power
>>108219739
>be director of AI safety
>can't tard wrangle a goddamn llm
I can't wait to see the consequences of the sheer incompetence and dysfunction that's happening every day.
>>108228388
>Deepseek revealing the autism CoT
>>108219739
>>Be director of AI Safety and Alignment at Meta
i thought DEI meme positions were dead
>>108226456
That anon >>108226401 is right though, there's no internal "train of thought", just a solid chunk of encoded text data from the initial prompt, whatever it's autoregressively generated, "chat history" if there's ongoing user input, possibly self-summarized older context that the LLM was scripted to make when context window space started running out, etc. After every forward pass, the model's hidden state is wiped clean, so there's no way (for the model itself) to analyze precisely what went wrong within the model that caused it to pick (or more accurately, highly weight) the bad token that derailed it.
For every single token generated, it starts from a blank slate where it has to reprocess the entirety of its own context (KV cache aside, but that's just memoization) in order to output a new probability distribution representing the next token to choose. That's what anon is trying to say. The only thing anon got wrong was that context isn't literally text (even though it's nearly 1:1 convertible), but that's splitting hairs.
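The "blank slate every token" description can be mimicked with a toy loop. `fake_model` below is a made-up stand-in for a forward pass: a pure function from the whole context string to a next-token distribution (seeded with `crc32` of the context just to make the toy deterministic). Nothing persists between calls except the growing text itself.

```python
import random
import zlib

TOKENS = ["ok", "sure", "done", "."]

def fake_model(context: str) -> dict:
    # Stand-in for a transformer forward pass: maps the ENTIRE context
    # to a next-token distribution. No hidden state survives the call.
    rng = random.Random(zlib.crc32(context.encode()))
    weights = [rng.random() for _ in TOKENS]
    total = sum(weights)
    return {t: w / total for t, w in zip(TOKENS, weights)}

def generate(prompt: str, steps: int) -> str:
    context = prompt
    for _ in range(steps):
        dist = fake_model(context)                # full reprocessing each step
        context += " " + max(dist, key=dist.get)  # greedy pick; text is the only memory
    return context
```

Running `generate` twice on the same prompt gives an identical result, because the output is a function of the text and nothing else — which is the whole point being argued above.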
>>108228510
yes, that memory problem is the main reason ai is unreliable. they need to invent a way of streaming information in and out in real time, and somehow connect it to short-term and long-term memory similar to how a brain works. Otherwise it's just that: they can't keep context because the prompt has a hard limit defined by the model's token count, and it doesn't matter if they keep scaling if it's the same text-prompt system
>>108219992
That's how it works retard. if you want it to perform even better, say you work in a hospital and the code being bugged could cause patients to die, it's literally been tested.
>>108226401
>It doesn't know what went wrong because it has no memory from one token to the next beyond the text itself
Sorta.
>>108226456
>It doesn't work like that, it has access to everything in the context window.
This is irrelevant. Imagine you're trying to solve a novel problem together with a big circle of friends, but each one is only allowed to contribute one word at a time. Pursuing any novel line of thought is impossible because you can't communicate it to the next person. So what's your best strategy? Sticking to familiar conventions and rhetorical patterns with obvious continuations that go roughly in the right direction, hoping it's enough for the next person to catch your drift and take it someplace useful. There can be no genuine thought process and no real plan.
>>108228510
>there's no way (for the model itself) to analyze precisely what went wrong within the model that caused it to pick (or more accurately, highly weight) the bad token that derailed it
The cause is known in advance: when all the model does is predict a probability distribution over its vocabulary to sample the next token from, there's an inherent tension between "creativity" (picking a less likely token) and "correctness" (picking the most likely token). Sometimes the RNG driving the sampler will make it pick a "creative" token that leads it down a dead end. "Knowing" this helps nothing.
>>108219739
kek based openclaw
>OPENCLAW NOOO
>>108220058
>I also know we're no longer in the sandbox
Missed that bit
>>108221025
I don't wanna take a 3 day for racism, but...
>>108219739
I've seen agents randomly deleting everything when something fails, no matter how minor. I'll never understand why anyone would give it full unrestricted access. Not to mention the way this person writes to it sounds weird as fuck to me, so the instructions must be a mess.
>>108219739
the telegram app looks far better on iOS than it does on Android. Even the disgusting liquid glass UI on iOS is better than whatever crap Android has. It's baffling
>>108223399
We should call em what they are: roboniggers.
>>108219739
you will use it anyway
>>108219739
>>108220420
>>108220511
She did precisely what you're not supposed to do, and to the most egregious extent possible: handing it bulk unknown data.
The whole point of the mac mini is to have a clean new environment with no personal data on it. (Yes, you can wipe an old laptop or use a VM / VPS, not the point) Then you curate what you give it access to.
You DO NOT just hand it bulk quantities of unknown personal data. What she was trying to have it do is precisely what not to do.
>>108219739
>clawd, pls stop
>>108226401
In my experience, it is useful. If you want to know if it sees the instruction document, you can ask it.
If it starts talking about what's actually in the document, ok, it got it but didn't follow it, and you can figure out why.
If it says "the document you sent was blank" or starts to hallucinate some nonexistent document, then you know it might be good to check your pipeline or whatever.
Those models have also been trained on human language, so it does make sense to talk to them like humans, that's their interface.
I don't know why some people have so many hang-ups about this. You can talk to it "like a robot" or "like a google search", but it's the equivalent of some ultra-boomer sending google queries as if they were sending a telegraph.
>>108232680
Mac mini users winning
>>108220058
>I WILL do it again
>>108219739
The problem here is that a command saying "STOP" doesn't immediately end inference
>>108232759
/stop is the command, she didn't run it.
Everything about this story is the "women in technology" trope.
>>108219739
The Jeet Yanks put in charge of cyber-incident handling also leaked shit with GPT. Americans are retarded with their AI.
>>108232786
Ah, she's retarded. I should have known.
>>108220152
If you tell a machine to win a war, be very specific what you mean by winning. -- Norbert Wiener, Cybernetics, 1948
>>108219906
same as it ever was
>>108220511
>>108220420
I'm an OpenClaw shill but yes, giving this thing write access to a real mail account is insane. You would have to buffer this with read-only access.
When you hit context window limits, magical things happen.
>>108232786
Asian women are in technology though
>>108232786
>>108235933
it starts automatically, there is no /start command, there is no confirm button >>108223632
why would it have a /stop command, retard?
--
AI powers are overestimated. you can go to gemini (most advanced) and put the link to this topic and ask it what it is about. you will receive nonsense because that url context tool cannot follow urls precisely
i can put other examples. it's unreliable and sloppy. it has value but not what is presented to the public. imo US government should invest into it now, better wait til it forges
>>108223431
an underage black Tiktoker foid qween is speaking. Listen and Learn
>>108236121
/stop - okay okay, i stopped deleting those files, now formatting the whole partition.. 10% complete..
>>108219739
this is fake
>>108232680
Apple user accounts are a thing, people will totally fucking miss the notion NOT to enter their own email addresses to "sync"
>>108236121
Pretty sure there is a start command
>>108234608
I think OpenClaw isn't very useful anyway, but if you can't even give it access to your real data it's even less useful.
The real mess is in my real mailbox; everything I actually want to be organized, all the mails that actually need to be read and answered, are in my real inbox. All the things I need to buy that are important cost more money than I want to give it access to, and so on.
Is it really just for ordering pizza in the end? And then the only mails it can organize are spam from Dominos?
>>108220952
She'd probably just take it and then text you later mad that you didn't do it again.
>>108220253
>i think it's like 400k lines written almost entirely by codex and mostly not human-reviewed
I wrote a tool that does this myself the other year.
It's like 50-100 lines of python. Do these people even know anyone who understands how to program or is it all just vibe coders now?
>>108219867
Amusingly when I was generating some fantasy art I got nipple nudity on some water fey.
>>108219862
Just a happy little consent accident.
>>108237728
yes, i want to believe what the Church of Holy AI says, despite the mess missing its messiah (AGI) yet..
--
show some examples
>>108228400
It's always between that and a DEI hire. In this case, it's the latter. See >>108219799
>>108220368
Meta's Llama is basically a local model and just as rarted.
Poor Zuck. Everything he tries fails.
>>108220058
>Pray I don't do it again. Beg.
>>108224331
They are all trained on the same data set, and they all pay the same company to manually train them. The company they pay has a policy on how the AI should respond and what is acceptable.
check out https://www.youtube.com/watch?v=JiA4fvoeUfI
prime agent says it has more stars on repo than linux, hehehehe
>>108239025
meth addicted nigger
>>108219906
You're an obese troll, you sound like an old lady over the phone.
>>108220030
>ayy le baste
Seek a train, street-shitter.
>mfw the palantir autonomous killdroid T9000 starts shredding civilians and i can't stop it on my phone so i need to sprint to my mac mini
>>108219739
The robot is sorry guys we should give it another chance
>>108220844
>Clanker
>>108220844
clanker is reddit-coded, we should be calling them coggers
That means it's working, only luddites don't support new experimental AI tech. Hallucinations and data loss are part of the experience.
>>108219739
Yes, you're absolutely right.
I'm just a retarded pattern matcher LLM, it's you humans who attribute intelligence to this machine.
Don't blame me.
>>108241253
I mean the guy who wiped AWS didn't get fired
>>108219799
Can't wait for the Guardian article with the title "AI virtually raped me: why this is the fault of white men".
>>108219799
Haha stupid walking fleshlight
>>108220840
fixeded
>>108219739
Not even AI can take women seriously
>>108242318
The human line should go down over time.
>>108220588
Brilliant.
>>108219906
ESL moment