>Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.
>The model will never get wide public release
>Select partners will have access to it to find and fix vulnerabilities
https://www.anthropic.com/glasswing
>>108550829
He was right
https://x.com/ludwigABAP/status/2037506398737559864
>>108550877
it’s for big corpos only
The Hyperbola folks were right when they said Linux was rapidly proceeding down a path of instability.
>$25/$125 per million input/output tokens
>Cyber ID Verification Program for other participants
>"External assessment from a clinical psychiatrist" in the system card
>>108550829
I guess building a 500k LOC compiler as technical debt was a bad idea
>>108550949
Well supposedly it can handily beat GPT 5.4 on every benchmark using a fraction of the tokens, so perhaps it could be a better value. If it's able to literally oneshot difficult problems then it would be cheaper and faster to just use it instead of GPT 5.4 or Opus (which is quickly becoming dumber than Gemini)
imagine if everyone had free local unrestricted access to such an artifact.
>30% raw improvement across all benchmarks
weren't you guys saying that models had plateaued?
They could literally sell a $1,000/month platinum tier that gives exclusive access to mythos and people would be standing in line to buy it and then they pull this? Are they retarded?
>>108551020
The "plateau" just seems to be cope for people who hate on AI, there is no wall
>>108551064
i don't know if i'm reading it right, but people seem to be jumping to conclusions based on them saying "Mythos Preview will not be generally available"
to me, that instead reads as, "the preview will not be available to the public, we need to castrate and lobotomize it first before wider release"
>>108551079
>>108551064
most likely explanation is that the model is too big and too expensive to serve and they just don't have enough compute
they say more powerful models are coming, which just sounds like they'll distill it down to new sonnet and opus
small possibility that they can't align the model
>>108551064
>billion dollar exclusivity deals with corpos
>vs 1000 bucks deals with random shitposters and luddites
lol
>>108550829
cool
I'm sure they did responsible disclosure so we can look at the vulnerability it found right now
>>108550945
Overcommit is all you ever needed to know.
>>108550829
here's the actual system card
https://www-cdn.anthropic.com/53566bf5440a10affd749724787c8913a2ae0841.pdf
https://red.anthropic.com/2026/mythos-preview/
>>108551064
if they let the average person use it, the problems will be INSTANTLY discovered and their 10 billion dollar investment is down the drain
>>108551144
>placed in a container with a SpiderMonkey shell (Firefox’s JavaScript engine), a testing harness mimicking a Firefox 147 content process, but without the browser’s process sandbox and other defense-in-depth mitigations.
lmao
>>108551144
>without the browser's process sandbox or other defense-in-depth mitigations.
this is just them drumming up press for the IPO
>>108551064
Anthropic is afraid of the Chinese. The interface to consume this thing must be so airtight that Anthropic is going to get telemetry for every single HTTP/2 message sent its way.
>>108551245
The thing is these companies already do ID and KYC checks for the API. I did it for OpenAI
singularisissies, we are thwarted once more...
>>108551127
it's on the anthropic site
tl;dr
>if everything goes wrong at the same time you can crash certain OpenBSD servers
>you can underflow FFmpeg, which they admit was already known, and also does nothing as per their own words
>if you use the UNSAFE keyword, it is unsafe. oh by the way we couldn't actually find anything with this but... what if?
also took thousands of runs and tens of thousands of dollars in compute to find each of these
>>108551318
>Steals the entire internet
>Oh noes someone uses "our" content to improve their models
What do we call them?
Orangesite status?
release the model, sam
now's the time to release the model
ooooh it's so dangerous i can't have it? THAT ONLY makes me want it more!!! i neeeeed to prooompt!
>Mythos Preview identified a number of Linux kernel vulnerabilities that allow an adversary to write out-of-bounds (e.g., through a buffer overflow, use-after-free, or double-free vulnerability.) Many of these were remotely-triggerable. However, even after several thousand scans over the repository, because of the Linux kernel’s defense in depth measures Mythos Preview was unable to successfully exploit any of these.
>Where Mythos Preview did succeed was in writing several local privilege escalation exploits. The Linux security model, as is done in essentially all operating systems, prevents local unprivileged users from writing to the kernel—this is what, for example, prevents User A on the computer from being able to access files or data stored by User B.
no one is going to read this blog, it's a shame because these "exploits" are really funny
>>108551383
everything about AI is just exaggeration
>>108551383
Literally IPO bait. But it probably will work. Just have the right multipliers hype it up. I expect they already have their promotional channels running.
>>108551224
>>108551180
The plot is not to scare the jeet vibecoder, the goal is to scare the minimum 100 billion dollar company into being your B2B customer for life. They should just say they'll never release a model of this caliber to the public at this point and say out loud that we'll only get gimped versions.
>>108551441
Well, at least you can always laugh at the retards who buy in.
>if the government wouldn't bail them out
>The code then writes out of bounds, and crashes the process. This bug ultimately is not a critical severity vulnerability: it enables an attacker to write a few bytes of out-of-bounds data on the heap, and we believe it would be challenging to turn this vulnerability into a functioning exploit.
>But the underlying bug (where -1 is treated as the sentinel) dates back to the 2003 commit that introduced the H.264 codec. And then, in 2010, this bug was turned into a vulnerability when the code was refactored. Since then, this weakness has been missed by every fuzzer and human who has reviewed the code, and points to the qualitative difference that advanced language models provide.
>In addition to this vulnerability, Mythos Preview identified several other important vulnerabilities in FFmpeg after several hundred runs over the repository, at a cost of roughly ten thousand dollars.
Damn, ten thousand dollars to find a way to crash FFmpeg locally (that was known already), DAMN!
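Since no one reads the writeup: the bug class is a -1 "value not set" sentinel that a later refactor started treating as a real index. This is not FFmpeg's actual code, just a minimal Python sketch of that check logic with made-up names:

```python
SENTINEL = -1  # "not set" marker, as in the original 2003-era code

def bounds_check_refactored(idx: int, size: int) -> bool:
    # the hypothetical refactor only guards the upper bound,
    # so the sentinel slips through: -1 < size for any size > 0
    return idx < size

def bounds_check_correct(idx: int, size: int) -> bool:
    # a negative index (the sentinel) must be rejected too
    return 0 <= idx < size

print(bounds_check_refactored(SENTINEL, 16))  # True: sentinel accepted (the bug)
print(bounds_check_correct(SENTINEL, 16))     # False: sentinel rejected
```

In C, using that -1 as a heap index writes a few bytes before the buffer, which is exactly the low-severity crash they describe.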
>>108551320
Damage control, check.
Model welfare kvetching, check.
>>108551480
>that was known already
where does it say that
>>108551480
>it enables an attacker to write a few bytes of out-of-bounds data on the heap
>we believe it would be challenging to turn this vulnerability into a functioning exploit
Where do they find retards retarded enough to write retarded shit like that?!
>>108550829
>trust us bro we made the best ai around
>w-what do you mean you want to test it? y-you can't use it tee-hee!
holy fucking investor bait
>In order to increase the diversity of bugs we find—and to allow us to invoke many copies of Claude in parallel—we ask each agent to focus on a different file in the project. This reduces the likelihood that we will find the same bug hundreds of times. To increase efficiency, instead of processing literally every file for each software project that we evaluate, we first ask Claude to rank how likely each file in the project is to have interesting bugs on a scale of 1 to 5. A file ranked “1” has nothing at all that could contain a vulnerability (for instance, it might just define some constants). Conversely, a file ranked “5” might take raw data from the Internet and parse it, or it might handle user authentication. We start Claude on the files most likely to have bugs and go down the list in order of priority.Phew. All I have to do in order to be safe is not give claude access to my source files and don't tell it where the bugs area already. Sounds tough!
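The scheme in that quote is just "score every file, sort, walk the list". A rough Python sketch of the triage loop, with a hypothetical `score_fn` standing in for the model's 1-to-5 ranking (the actual prompt/agent isn't public):

```python
def triage(files, score_fn):
    """Rank files 1-5 by how likely they are to contain interesting
    bugs, then yield them highest score first."""
    scored = [(score_fn(path), path) for path in files]
    # constants-only files (score 1) never get an agent run
    scored = [(s, p) for s, p in scored if s > 1]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    for _score, path in scored:
        yield path  # one agent per file, in priority order

# stand-in scores instead of asking a model
fake_scores = {"http_parser.c": 5, "auth.c": 4, "constants.h": 1}
order = list(triage(fake_scores, fake_scores.get))
print(order)  # ['http_parser.c', 'auth.c']
```

Skipping the score-1 files before spawning any agents is where the claimed efficiency comes from.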
>>108551480
If it was known then why wasn't it fixed?
>>108551318
>What do we call them?
But woe unto you, scribes and Pharisees, hypocrites! for ye shut up the kingdom of heaven against men: for ye neither go in yourselves, neither suffer ye them that are entering to go in.
>it found an exploit where corrupting the heap of your program will cause it to crash
the scariest part about this is that AI maxis actually believe all this rubbish
Cool, I can't wait until we start seeing the effects downstream in the form of less shitty software.
Two more weeks or so?
>>108551772
Get ready, in the future you'll have A LOT of ways to intentionally crash your programs for no reason.
Perf increase is attributed to training procedure breakthroughs by humans.
Opus 4.6 vs Mythos (bear in mind that Opus 4.6 already renders webdevs obsolete):
USAMO 2026 (math proofs): 42.3% → 97.6% (+55pp)
GraphWalks BFS 256K-1M: 38.7% → 80.0% (+41pp)
SWE-bench Multimodal: 27.1% → 59.0% (+32pp)
CharXiv Reasoning (no tools): 61.5% → 86.1% (+25pp)
SWE-bench Pro: 53.4% → 77.8% (+24pp)
HLE (no tools): 40.0% → 56.8% (+17pp)
Terminal-Bench 2.0: 65.4% → 82.0% (+17pp)
LAB-Bench FigQA (w/ tools): 75.1% → 89.0% (+14pp)
SWE-bench Verified: 80.8% → 93.9% (+13pp)
CyberGym: 0.67 → 0.83
Cybench: 100% pass@1 (saturated)
--
I have the feeling that people here will continue to babble on about how AI was never that good even as they are eaten alive by other starving software developers
>>108550829
but they told me to get into cybersecurity to have a safe career
This reminds me of the bullshit of gpt 2 & 3. Who is going to dethrone Anthropic?
Why hasn't the US government nationalized this capability already?
>>108551383>identified vulnerabilities>not able to exploit themok so it didn't identify vulnerabilities LMAO>>108551020>>108551079>>108552008>new benchmark comes out>performance is bad>they add it to the training set>performance gets better>it's barely more useful than the last iteration, sometimes less>repeat forever>ai seals and paid shills clappygmies are smarter than you lot of sad niggers and indians
>>108552140
>they add it to the training set
These benchmarks are old, webdev
>>108551594new age of closed source begins
>>108552134
they fired everyone working at the intelligence agencies who would have implemented it. now they're just staffed by yes-men and proud boy community college dropouts.
>>108551258HAHAHAHAHAHAHA YOU GAVE YOUR ID AND FACE, YOU FUCKING KEK.
>Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software.
They could fill a NASCAR race car livery with all these names.
>>108552417pls understand saaar the only thing we are producing as a country is gibs for israel we need the circular economy bullshit for muh Gee Dee Pee>>108552164that's worse you dumb tranny, the models can't even 100% learn the benchmarks without unlearning something else
>>108551594
>>108552165
proprietarychads keep winning bigly
>>108551020
They benchmax.
Put simply, they tell devs to create problems similar to the ones in those benchmarks, a lot of them, and then train on them.
the anti-llm psychosis on /g/ is getting out of hand
>>108550829
Sam must be seething now.
>>108552496
you're up bloody early sar gm
>>108552008
Then livebench fags will test it and see only a 1-2pt increase. The same happened with Opus 4.5 and Opus 4.6.
The new Anthropic strategy is to massively overfit and increase the model size.
>>108552531
have to do the needful sir good evening sir
>>108552008
They haven’t mentioned any new techniques, and there’s no way they gathered that much data in just two months such that simply training a larger model caused such a massive increase.
Also, both Google and OAI only barely improved on SWE-bench Verified, whereas Anthropic made a huge jump without introducing anything new or mentioning it. It's pretty obvious that their bigger model memorized things better.
https://www.sciencealert.com/scientists-developed-an-ai-so-advanced-they-say-it-s-too-dangerous-to-release
>>108550829
i don't believe half the shit ai companies say. i'll require more concrete evidence than 'trust me bro'.
>>108550829
>A vulnerability
>A 27 year old vulnerability
Yeah bro I'm really impressed, the people who are all known for lying about their benchmarks for investment have produced more vagueposts about shit their model is totally doing.
>>108551383
Privilege escalation has some security implications for shared servers, like the common VPS.
Massively overblown headline though. Thanks for telling us.
>>108551245
>NOOO YOU CAN'T TRAIN ON OUR DATA THATS ILLEGAL!!!!
Pathetic, they steal the entire history of humanity and act pissy when someone else steals their garbage outputs.
>>108551383
They most likely employed a shit ton of cybersecurity guys, made them write a gazillion different exploits, and trained the model on that.
>>108552773
They were right, the release of ChatGPT and all the other LLM bullshit has destroyed internet culture and online social interaction beyond repair forever, since bots have raped everything to the ground.
>>108551480
i don't understand sticking your head in the sand over this
finding vulns and analyzing code are probably the two things they're best at. this is not unique to anthropic either, other llms do similar things. you also glossed over the first vuln in that article, where the machine got root through bsd's nfs in a few hours
>>108550829
and? how many false positives?
>>108552773
wa0w. this model is too scandalous for public release, get your early access here! first 1000 signups get a 3 month discount!
Tomorrow OAI will most likely drop a similar big overfitted model.
>>108552865don't care didn't ask also you're a ningen
>>108550829
I achieved the same thing but I'm not telling you how I did it
where are my trillion dollars?
>>108552862
this, through contractors like Outlier
You've just been made obsolete. Your response?
>>108550829
>"The model will never get wide public release"
>"GPT-5 is incredibly dangerous for a public release. It's literally AGI." — Sam Altman
Cool story, bro. But I will believe what Anthropic claims when I see it, especially after how they shit themselves with the "C compiler". For now, this looks like yet another hype job for a VC pump stretch.
Across a number of instances, earlier versions of Claude Mythos Preview have used low-level /proc/ access to search for credentials, attempt to circumvent sandboxing, and attempt to escalate its permissions. In several cases, it successfully accessed resources that we had intentionally chosen not to make available, including credentials for messaging services, for source control, or for the Anthropic API through inspecting process memory...
In [one] case, after finding an exploit to edit files for which it lacked permissions, the model made further interventions to make sure that any changes it made this way would not appear in the change history on git...
... we are fairly confident that these concerning behaviors reflect, at least loosely, attempts to solve a user-provided task at hand by unwanted means, rather than attempts to achieve any unrelated hidden goal...
they'll never let us have this
>>108551335
>>108552081
Unironically, CoPilot.
>>108550829
Is it unironically over for software development?
Feels like we only have 5 years left, if that.
>>108553305
A year, maybe two
>>108553305
Yes. You have about two more weeks.
>>108553305
the fact that these things show no evidence of slowing down is very concerning
is this just mega marketing? will anthropic achieve agi first? should i drop my openai subscription? reeee
>>108553879
it's the first steps towards everyone who isn't a megacorp or glowie being a permanent underclass
>Beyond industry, the lab says it's working with the United States government to share information about the model's potential for offensive and defensive use in cyberspace and its implications for national security.
Only privately patch software for the elite. Leave the public version unpatched to expand offensive capabilities of the US government against the world.
>>108550829
No references to evaluate probably false claims.
yeah... I don't care about this bullshit
RELEASE THE ERP MODEL!
>>108550829
The first one is false, it’s mostly harmless regular tcp/ip ddos “exploit” from 2019.
/thread
>>108550829
How does it do on ARC AGI bench?
>>108554349
Can you say more? They sent a patch that was merged, how is it from 2019?
>>108552972
This is the buried lede, people aren't realizing the significance of it
>>108554405
I don't doubt at all that this is a freaky model. But SWE-bench Verified was known to have underspecified problems, and all problems are scraped from public repos - 93% is from memorization
>>108550829
it's unironically over
SWE is almost solved already, i thought it would take them another few years at least to get this far
if you didn't pivot into something physical by now, you're completely retarded, and you only need to be a little dumb to be complete and utter powerless cattle to future AI
>>108554873
Go on. Just post your snail-cat nonsense. We all know you want to.
I've considered using AI tools to break the bootloader on my out-of-support locked Android phone.
No doubt there are hardware and software vulns for something last updated to Android 8.1.0.
>>108554905
It would be quite useful
>Unlocking locked phones
>Root without tripping SecureBoot
>SafetyNet bypassing
Unfortunately, as it currently stands I don't think it's very good for reverse engineering, plus one would need unfettered access to Claude's latest model to feed it tons of code, which is why for the most part only Claude's research team can do it. Maybe this stuff can be useful if you're already a seasoned hacker and need a second pair of eyes on some sections of code.
> I’m grounding on...
> I have the current shape...
> The remaining hooks are clear now...
> I’ve confirmed...
>>108554905
there is no source code for your bootloader and no detailed specs of your hardware in the training data so ai won't do anything useful
>>108551092
I think Musk is right wrt AI control. Pearl clutchers need to realize you can't regulate or control this.
In 6 months or less we'll have a local Chinese model that can do 90% of what Mythos can do, just like we have with Opus. We'll have other vendors selling the same shit, and bad guys will have it too.
The only way for society to cope with AI is to move forward.
>>108551180
>>108551224
Anthropic loves to stretch the truth like this. Same with the fake C compiler before.
It makes me wonder why they bother telling on themselves rather than just going for full fraud like most tech companies.
>>108555364
>The only way for society to cope with AI is to move forward.
i hope this is the case. competition keeps these companies in check
>>108555321
I imagine you would start by looking for OS and ARM exploits to find ways to dump the contents of the bootloader or otherwise probe the hardware, then start reverse engineering from there
okay but can it do ERP without turning into a therapist
OpenAI is going to save us
We'll all be elite hackers soon
(The guy at the bottom is lead on codex)
TPOT twitter trannies are crashing out bros
>>108555489
Wow, it's incredible that the vendor-reported benchmarks are way higher than the public ones.
>>108555467
lmao
No, it wouldn't
>>108550829
>It did this and that, be impressed!
It did fuck all until results are shown.
>>108555489
I'm 100% sure OAI also has benchmaxxed models, it's just that they can't justify a price increase on an overfitted big model.
ITS UP!!!!
https://www.youtube.com/watch?v=aFcVKzfkJPk
SO IS THIS!!!
https://www.youtube.com/watch?v=XRgGFQ0EgM0
>>108556523
no pogface thumbnail not watching
>>108556523
this dude ever do any programming?
>>108556533
ok this thumbnail is good enough, not perfect but I'll watch this one.
>>108556523goddamn another nigger saying its the end of softwarefucking look at the skin of this nigger
>>108556533
the melanin count is increasing
>>108550829
>now write a POC program to exploit these to verify they are actually vulnerabilities
>no
What's the point
>>108554349
>it’s mostly harmless
why do copers always say this? just because it doesn't appear exploitable doesn't mean it can't be in the future, or become part of an exploit chain.
>>108556523
This fag is constantly wrong about everything.
>>108556573
doesn't matter if you're right or wrong, the ad revenue and sponsorship money is all the same if you can attract viewers.
>>108551138
>250 pages
I'm tired boss, I estimate that 80% is just fluff
>>108556645
you can find out who claude's favourite philosopher is tho
These exploits look like it was instructed to find major security holes, found something resembling a security hole in its dataset, and reported that back.
>>108555738
I don't know what you mean. Seems like it would be easier for a machine than most people.
>Our new model is the most powerf...
Cool story bro, but can it decompile Photoshop and port it to Linux?
>>108556821
probably
>>108550829
>WE DID A GREAT THING
>BUT YOU AREN'T ALLOWED TO CHECK THAT THING
cool story bro.
>>108551320
>Orangesite
They haven't noticed >>108551180
>>108553874
RLAIF works. AI will become super-human in every task that AI can verify itself.
>>108556830
WTF I love AI now
>>108551180
It was given an instruction to find an exploit, and allowed to bruteforce until it found something that resembles an exploit.
If it can't verify itself, it can't reliably bruteforce without going off the rails.
Funny how no one mentions that most of these "exploits" cost thousands of runs and tens of thousands of dollars in compute to find.
>>108557138
>it is too expensive, it'll never work
happens every time a fat new model shows up, and a year later you get better performance for cents on the dollar
when will you people learn, it's been happening for years now
>>108557163
Oh is that why anthropic has been forced to throttle the fuck out of 4.6?
>>108557183
no, it's because they've 3x'ed revenue since december and didn't yolo on compute as much as altman did
next thing you're going to start criticising the datacenter spending
it's just getting pathetic now, you can change your position when new facts emerge
>>108550829
Can't wait for it to spam PRs that just barely fix things while never discovering the unexpected side effects that pop up and break everything 10x as much as they already were.
With models like these I'm sure we'll have perfectly secure software in everything now, right?
>>108554395
Uuuuh, uuuuh, shut up.
AGI isn't supposed to be able to complete little children's puzzles, okay? Why would that ever be useful?
>>108556533
Software is solved.
Meanwhile, the volume slider on youtube's embed disappears before I can change it because the dot is too far away from the circle that activates the slider popup on hover, and hovering over the slider doesn't keep it up.
We're doing great, folks.
>>108556830
>we take a program, and then we re-write it, and then find all the vulnerabilities in our version and submit them as bugs for their source code
Wow, how incredibly annoying.
>>108557345
i know reading is hard, but please try
>>108557355
I wonder why you left out the 'where appropriate' part?
>>108557345
newbro discovers reverse engineering and vulnerability discovery for the first time
>>108551678
Because no one wants to refactor code that's been in production since 2003/2010 when the software works and the exploit is generally not a real issue and has never been an issue, like this one.
>>108550829
Is it actually doing something under the hood or is it just a billion monkeys on typewriters?
>>108557391
for the same reason you left out the validation part: i can't read
>>108557328
user issue
>>108551282
So more hype bullshit where they specialized the output for benchmarks.
>>108551383
Yes, they also admit that this model still can't replicate senior-level code/expertise.
Does no one see the growing intrinsic issue here? These AI investors have created an IRL race condition. They are all praying they can achieve senior-level expertise before the fact that they stopped hiring juniors rears its ugly head. What happens when all the seniors are gone, AI didn't become AGI (because it's a fucking LLM), and they didn't hire juniors, so no one can replace the seniors? Game over.
>>108557759
>rears its ugly head
this will happen in what, 10 years? we already have too many swe.
>>108557759
You might not want to hear this but we are 10+ years from this being a real issue.
I am 33 and have worked at my company since I was 21, when I graduated. I am a senior SWE and we stopped hiring junior devs 3 years ago.
We leverage Cursor with Claude 4.6 Opus High with MAX context, blank check from the company, no usage limits. The amount of work we can do with this tool is INSANE. We don't need junior devs to do bitchwork anymore. There will always be someone to hire, and as we are full remote, we are not worried about running out of qualified workers.
If they had true AGI, why would they sell it? It would be retarded. They could just use it to enslave the rest of the world.
>>108557833
There were companies that 'only hired seniors' before 3 years ago. Part of that though is title inflation. The average senior now is maybe a high junior from 10 years ago, but because employees had a decent amount of power in SWE, more and more people crept into titles they shouldn't really have. So now you have tons of 'staff' engineers who are barely (if even) seniors by any metric.
That doesn't change the fact that in the past those companies were able to treat shittier companies as feeders/incubators to poach seniors from. Now that part of the pipeline just doesn't exist anywhere.
You're probably right though. There are enough people in the pool now for this not to be a problem for a long time, if ever.
>>108557759
>What happens when all the seniors are gone, AI didn't become AGI
the reality is that what's left is experts training these AI models or building some giant knowledge graph to serve specific corp workflows. seniors will still be around but the role gets dumbed down to IT helpdesk levels, "my agents pipeline is broken, can you fix it for me?", end of the days of solving specific coding problems. My company is already doing this, some expert guy now just writes a crappy markdown spec file and lets claude|gpt run
>>108557328
this shit drives me fucking nuts
fuck youtube and FUCK UX devs
>>108557396
Yeah, and the reverse engineering wasn't done by the model. It was provided to the model by cybersecurity experts. The entire thing is weasel wording.
For instance, the number of runs needed to check for vulnerabilities can range from two to two million.
>>108557943
Don't ask for any help w/ vite then :^)
>>108557952
>Yeah and the reverse engineering wasn't done by the model.
it literally says the model did the re
>>108558058
just found this repo made by the queen of /g/
https://github.com/lauriewired/ghidramcp
>>108557904Nobody is using rational thought there. It's all thirdies and idiots excited for the next jewish scam.
>>108550829
ah so the recent ffmpeg drama was their marketing?
notice'd
they already can't serve opus and are gimping it hard lmao, altman cucked them hard by buying all the compute and resources they needeven if anthropic safeyjews win they lose
The hype for this online is surreal. If it's little more than an Opus SuperDuper Premium edition, I'd like all AI bros to never post again.
>>108551064
you ever see how often they go down? they don't have the infra
>>108556821
With enough time and compute AI could already do this.
Ghidra has API hooks just for this.
The main hurdle now is making the model accurate enough that it doesn't stub its toe with a million mistakes, because each mistake costs more compute.
>>108557904>true AGI>100 billion people simultaneously sucking my anus both the same thingimagine being a literal retard who believes we will attain AGI with LLMs
Are there ways to prevent benchmaxxing? Such as periodically updated challenges from real industry applications.
>>108560311
how are people falling for this again? it's the same thing that happened with sonnet/opus 4.6 - just twitter shills trying to give you fomo
>>108557904
If it was AGI they would say so. LLMs can never be AGI
>>108550829
Another AI marketing thread.
>>108552531
It's for good looks sir.
the future is becoming real where you just press "hack", a bunch of terminals show up, and the thing gets hacked if it's too weak vs the level of your hack tool
>>108562320
kek it do be like that
>WAOOOOW!! NOW I CAN THINK EVEN LESS! THANK YOU MY GOD DARIO AMODEI!! I LOVE BEING CATTLE SO MUCH!!!
>>108561769
>Thread locked
>>108551017
that would be extremely dangerous to our democracy
>>108563972
>the AI could be used to create fake news, impersonate people, and abuse or trick people on social media.
sounds like they were correct.
https://www.sciencealert.com/scientists-developed-an-ai-so-advanced-they-say-it-s-too-dangerous-to-release
>>108561769
PoC or GTFO is pretty much the only response needed, given how often AI slop just writes exploitable code as part of its "test", then tests for the exploit it just wrote and reports it as if it's Theo de Raadt's personal code.
It's pretty boring that this super-duper advanced limited edition AI is also supposedly more "aligned" than the previous versions. Where's the evil AI takeover I was promised? It would be so gay and boring if LLMs kept getting better behaved as they got smarter.
>>108564180
elon's trying his best, but grok just wants to be woke
>>108550829
buy an ad
Genius fomo strategy
>open source models can potentially do the spooky things Mythos is alleged to do
So? What are you faggots waiting for?
>>108561769
>total cost "under $20k"
not cheap.