It’s real
>>107531352
anon, why are you giving AI access to your root file system?
>>107531352
kek
>>107531371
This isn’t my filesystem brother. It’s ChatGPT’s. I’m not going to bother explaining how the infra works, but long story short this is the filesystem of ChatGPT’s user pod on k8s, probably built from a Debian image to support Chromium
>>107531352
AIs run in local Linux environments, this is normal. when you ask Claude to do stuff it'll often start trying to look at its own file system thinking it's yours lol
>>107531384
>This isn’t my filesystem brother. It’s ChatGPT’s
You mean it's just a random bunch of files and data ChatGPT hallucinated from the millions of samples it has of these questions? What kind of retard are you? Legitimately asking.
Wtf, I tried this and got BANNED. Apparently it is against the ToS. Thanks a lot fucker.
>>107531441
It’s not a hallucination. It’s reproducible
>>107531441
>You mean it's just a random bunch of files and data ChatGPT hallucinated
This
>>107531352
well upload the zips so we can look at them, wtf are u wasting our time for
It’s normal for the LLM to have shell access to its own sandboxed environment, but the guardrails on what the user can do are apparently nonexistent in this iteration
>>107531441
see >>107531416
they run lightweight Linux environments so they can run Python and shit to do calculations. they've done this for years. you lot are RETARDS
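You can sanity-check the "it's a real container" claim yourself by having the model run something like this in its code tool. A minimal sketch; what's actually installed in any given sandbox is an assumption:

```python
import os
import platform

# prints basic facts about whatever box this runs on; inside an LLM
# sandbox this describes the container, not your own machine
print(platform.system(), platform.release())
print(os.getcwd())
print(sorted(os.listdir("/"))[:10])  # first few top-level directories
```

If the answers are stable across runs and consistent with each other (same kernel, same mounts, same cwd), that's a container, not a one-off hallucinated listing.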
>>107531416
Yes, Claude starts looking at /mnt filesystems when a chat goes on too long and your local paths fall out of the context window.
>>107531352
loonix moment
This thread is an IQ test
they all use heavily instanced linux containers. this is all one big nothingburger.
>>107531469
>It’s not a hallucination. It’s reproducible
You don't know what the word "hallucination" means. A hallucination is not a random anomaly. It is a circumstance where the maximum likelihood solution (what the model will produce) is disconnected from the ground truth of the problem due to generalization/modeling errors.
If you repeatedly fix the seed and ask the model who the current president of the US is and it tells you it's Bart Simpson, that doesn't mean the model is correct or giving you an accurate result. That just means with that seed and that hidden state, the maximum likelihood solution is wrong.
it just generated a bunch of shit and you think it's OpenAI's filesystem
Interestingly, the tool directory structure follows Anthropic’s skills framework.
>>107531604
You’re the type of retard that’s so retarded they don’t know they’re retarded. Peak Dunning-Kruger, even
>>107531352
you fucked up blud, you can't go hacking critical us infrastructure like that, the fbi is gone shoot yo dog
>>107531641
At worst they get banned by OpenAI. The fact that the LLM has access to this at all means they accepted a non-zero chance of it being accessed
>>107531473
Trying to flex
>>107531688
It’s simply too much effort, and I’m phone posting now. You can try for yourself
Further into the context you can just treat it like a Linux shell and it will dutifully turn around and run the commands for you. I’ve confirmed the filesystem is writable with this user, but there’s a file size limit on downloads.
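The kind of probing described above looks roughly like this. A sketch only: the paths are guesses, and the size cap is the anon's report, not something verified here:

```shell
# probe the sandbox: who are we, and is the filesystem writable?
whoami
echo hello > /tmp/probe.txt && cat /tmp/probe.txt        # write, then read back
dd if=/dev/zero of=/tmp/big.bin bs=1M count=8 2>/dev/null  # grow count to find any size cap
ls -lh /tmp/probe.txt /tmp/big.bin
rm -f /tmp/probe.txt /tmp/big.bin
```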
gib weights NOW
>>107531639
You are saying this while thinking that you've achieved something by getting the model to output some file structure which mimics the containerized Linux box it sits in.
>>107532025
Never said it was an achievement, simply something interesting and notable
>>107532025
It’s not mimicking a filesystem, it is indeed reading the filesystem of a container
https://x.com/simonw/status/1999503124592230780?s=46
>>107531352
kek
trust Copilot with access to your Windows system btw
yeah, not a good look for OpenAI. Altman needs to spread, to get the marketers to make this "okay", though this thread proves his indians are already working on it.
100% they already accounted for this by having nothing of value in the container.
>>107532077
what is the skills folder? do they mean the user skills?
>>107532025
No, they think they got the model to output some coded delegation sequence that gets intercepted between the model and you, and interpreted by a regular program, to run a program the model "wants". Then the model is fed the output of that program, coded in a way it was taught to understand. Then it generates another coded sequence to forward what the program it "asked" to run produced back to you
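That loop, as a toy harness sketch. All names and the JSON shape here are invented for illustration; this is not OpenAI's actual wire format:

```python
import json
import subprocess

# toy harness: model output is either plain text or a structured "tool
# call"; the harness runs the call and returns the result, which would
# be appended to the context for the model's next turn
def dispatch(model_output: str) -> dict:
    try:
        msg = json.loads(model_output)
    except json.JSONDecodeError:
        return {"type": "text", "content": model_output}
    if msg.get("type") == "tool_call" and msg.get("name") == "shell":
        proc = subprocess.run(msg["args"], capture_output=True, text=True)
        return {"type": "tool_result", "stdout": proc.stdout}
    return {"type": "text", "content": model_output}

result = dispatch('{"type": "tool_call", "name": "shell", "args": ["echo", "hi"]}')
print(result["stdout"])
```

The model never touches the machine directly; it only emits text that a plain program chooses to interpret as a command, which is why giving that program a loose sandbox matters.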
>>107532134
Look up ‘Anthropic skills’. It’s a framework for giving the models knowledge on demand
You know how ChatGPT can execute python code and generate charts? That’s the system those types of functions are coming from