LLMs work OK with Tailwind edition

A general for vibe coding, coding agents, AI IDEs, browser builders, MCP, and shipping prototypes with LLMs.

►What is vibe coding?
https://x.com/karpathy/status/1886192184808149383
https://simonwillison.net/2025/Mar/19/vibe-coding/
https://simonwillison.net/2025/Mar/11/using-llms-for-code/

►Prompting / context / skills
https://docs.cline.bot/customization/cline-rules
https://docs.replit.com/tutorials/agent-skills
https://docs.github.com/en/copilot/tutorials/spark/prompt-tips

►Editors / terminal agents / coding agents
https://opencode.ai/
https://cursor.com/docs
https://docs.windsurf.com/getstarted/overview
https://code.claude.com/docs/en/overview
https://aider.chat/docs/
https://docs.cline.bot/home
https://docs.roocode.com/
https://geminicli.com/docs/
https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent

►Browser builders / hosted vibe tools
https://bolt.new/
https://support.bolt.new/
https://docs.lovable.dev/introduction/welcome
https://replit.com/
https://firebase.google.com/docs/studio
https://docs.github.com/en/copilot/tutorials/spark
https://v0.app/docs/faqs

►Open / local / self-hosted
https://github.com/OpenHands/OpenHands
https://github.com/QwenLM/Qwen3-Coder
https://huggingface.co/bartowski/Qwen_Qwen3.6-35B-A3B-GGUF

►MCP / infra / deployment
https://modelcontextprotocol.io/docs/getting-started/intro
https://modelcontextprotocol.io/examples
https://vercel.com/docs
https://mcp.desktopcommander.app/

►Benchmarks / rankings
https://aider.chat/docs/leaderboards/
https://www.swebench.com/
https://swe-bench-live.github.io/
https://livecodebench.github.io/
https://www.tbench.ai/leaderboard/terminal-bench/2.0

►UI/Frontend
Figma Make
Lovable
Claude design
https://uiverse.io/
https://ui-ux-pro-max-skill.nextlevelbuilder.io/
https://stitch.withgoogle.com/
https://gamma.app/
https://github.com/nextlevelbuilder/ui-ux-pro-max-skill

►Previous thread
>>108706320
>>108720179looks cool but what is it trying to do? hack windows XP? Or improve WINE?
Tailwind is even worse than React in terms of retardation. I would say retardation peaked with Tailwind.
>>108720160>but bro css is impossible to understand
>>108720203Yes, people say this with a straight face and then proceed to write tailwind slop that is unintelligible.
>>108720183i wanted to see if it can add SMB2 support to windows 98
>>108720160Stop evading my filters, fuckface
>>108720160VibeBUMP
>>108720268
>>108720191
Alpine.js + Bulma is where it's at. Alpine is a tiny reactive layer: you write your reactive stuff in the HTML itself, no compiling. Just straight HTML with extra reactive syntax sugar. Bulma is the opposite of Tailwind, it's pure CSS. I wrote my entire social media site in it.
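For reference, this is roughly what the no-build setup described above looks like. The counter is a made-up example; `x-data`, `@click`, and `x-text` are standard Alpine directives, and the CDN URL follows Alpine's documented install snippet:

```html
<!-- Alpine loaded straight from a CDN: no bundler, no npm, no compile step. -->
<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js"></script>

<!-- State and behavior declared inline in the markup. -->
<div x-data="{ count: 0 }">
  <button @click="count++">clicked <span x-text="count"></span> times</button>
</div>
```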
>>108720268I’m not the usual guy, sorry
>>108720291
For me where it's at is my framework: no build, no npm, and SSR-compatible, but powerful enough to build large SPAs if needed.
retard op forgor the subject edition
>>108720305ah, shit. I remembered it the first time but adding in a tiny bit about the picture pushed me over the limit and I had to recreate it and forgot the subject the second time around
>>108720321
Based. That's how Alpine is. Just copy the CDN, no build or NPM needed. Wrote my social media in it. It works SSR too, I heard it's popular with Astro. I've just never tried it personally.
>>108720315Just delete the thread while you still can and make a new one.
>>108720321Yeah, I sort of looked at alpine, it's relatively barebones compared to mine. Don't want to brag, I already posted the link in some other threads, but mine is very comfy and feature full.
>>108720329
no, it’s hopping already
I’ll leave this as a monument to all my sins
>>108720351You know what you must do. Spam Nikocado's asshole until you get purged and banned so the thread dies, and you shamefully with it.
>ask 5.4 for a verbose todo list
>17k characters
I don't get it. I have to pay monthly sub but still get limited use? What's the point then?
>>108720523It is cheaper for you than using APIs
>>108720608but sub is api?
URGENT HELP NEEDED
This is the only place I can ask this question without getting screamed at.
For planning and designing, which is better: GPT 5.5 High or VH, or Opus 4.7 High or VH? Code, obviously.
>>108720160nice subject op
>>108720617well yeah it of course uses some API but the other option is to use the provider's API that is billed by number of tokens in and out. I believe you know what I mean.
>>108720618
5.5, and it's not even close
Guys, I have a vibe coding step in my interview process, and no, this isn't a joke, I'm meant to be able to make Claude fix a bug in a simulated prod environment of the company. I used a bit of Copilot, but never tried real agentic vibe coding. Any tips? Like, can I give a sequence of commands to increase the odds of it doing it properly?
I don't suppose there's a hacky way to rig the ChatGPT web subscription into Codex to use ChatGPT as my IDE agent?
>>108720696whats a chatgpt web subscription?
>>108720696you mean to avoid hitting rate limits? I think I saw an anon here using some mcp server to do that
>>108720703
ChatGPT+ on the website chatgpt.com, so basically the thing most people use. It has no rate limits.
>>108720708
Yeah, that's exactly why. I hit a rate limit when using Claude and seethed and started doing it in the web, and the web gives me unlimited everything but I have to copy paste a ton.
So someone has done it and it's not just a crazy shot in the dark? I assumed it would be reverse engineer-able.
>>108720696
Pretty much Desktop Commander.
Note that you're "unlimited" until OpenAI feels like you're abusing things.
I just have the web ChatGPT do my planning on max think, as well as have it spit out optimized prompts for Codex to follow. It has definitely lessened the amount of usage I see against my quota, as I can now just coast on 5.5 low because all the heavy thinking has been done. Might go from Plus to Pro for a month, though, just to burn 5.5 med/high for lulz while they have the 10x going
Asking again, does anyone here use local LLMs alongside an agent client like Cursor?
>>108720861no
>>108720901why not
>>108720918why yes
>>108720861I do but for very limited stuff like proprietary code and stuff. I personally use Goose and hook it up to Qwen 3.6 35B. It's workable but I dunno, I feel like I have to herd it a lot, it's not a model you can type something in and have it figure stuff for you. Bigger models would be better but I am still waiting for money to fall out of the sky so I can afford a Nvidia blade server that can fry my home electricity grid to host Deepseek v4 locally. Until models improve, I would say they are semi-autonomous.
>>108720618GPT5.5. Opus 4.7 is better at really autistic debugging but GPT5.5 is a better architect and asks better questions in the design phase.
>>108720942>Opus 4.7 is better at really autistic debuggingcan you tell us more? im curious why you think that (like actually)
>>108720861
Been working on implementing this crucial feature into my app with Codex. I've wasted a few days' worth of tokens and I'm finally starting to get where I want it to be. For a while I was just talking to Codex, but I started working with GPT 5.5 in thinking mode in the webapp along with Gemma 4 26B-it (mostly just to confirm findings) and I'm liking this workflow. Am I really getting anything of value from Gemma alongside ChatGPT in the webapp? Probably not, but it's not hurting.
>>108720670
>Any tips? Like, can I give a sequence of commands to increase the odds of it doing it properly?
I dunno, the procedure is just
>locate the crash [paste stack trace]
or if you don't have a stack trace
>bug: [you describe some symptoms]
and run /simplify before committing
by the second time you've learned all there is to know about debugging
>>108720861No but only because I'm working on building the ACP server I want to use for that and I only have potato GPUs for the moment. Once my software is ready and I can play with gemma4 ggufs it's on.
>>108720949Experience. I'm a cheap bastard and don't want to pay for MAX5 so I'm juggling the Codex and Claude 20bux plans and free Gemini, on 10kLoC-magnitude C and Rust programs. These are big enough source trees that anything much short of frontier craps itself and starts hallucinating when given a 1-4 sentence bug description and told to find it.
>>108720975arent you an ai engineer?
>>108720923no token limit, free other than cost of hardware and energy, easily runs on 16gb+ GPUs many of us already own
>16gb+ GPUs many of us already own
somebody posted a link to some guy who was generating maps from a base image and it gave me an idea for the main menu of my game
Fucking Claude just burnt my whole fucking token limit in ultraplan mode reading my project before it even started planning.
>>108720993Yes and my employer is a penny pinching halfjeet who won't pay for Claude Max or Codex Pro.
>>108720975how do you get Gemini free trial?
>>108721241It comes free with your GMail account. Gemini freetier absolutely smokes Anthropic and OpenAI freetier. You can do real work with it.
https://openai.com/index/where-the-goblins-came-from/
>>108720956Gemmy just found the cause of a bug before GPT5.5 did. I'm actually impressed. It's super fast too. Sometimes I worry it didn't even look at my codebase because it responds so quickly.
>>108720942Interesting. I use both and I tend to think of GPT as the model with the better ’tism, but maybe that’s because it’s better at finding the right problem to solve
>>108721345It's a 26B/A4B model with less latency, I'd be shocked if it was running slower than GPT5.5. Now less ACCURATE or less CAPABLE is another story.
>>108720949In my experience Opus is better at running lesser known software, including your own software. Both agents can use grep and so on about equally well, but Codex gets confused running my programs, even misquotes shell commands from time to time. Both usually get it done, but with Claude the iteration cycles are shorter.
>>108713079
You are talking about linear algebra in a thread about using LLMs, and you don't know what quantization is?
There are two types of quantization: floating point (like fp8) and integer based (like Q8). Integer-based quantization stores blocks of integers scaled by a float, so different value ranges can be represented with more precision.
>>108713154
Fuck you.
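The block-scaling idea above can be sketched in a few lines. This is a toy version for illustration only: real Q8-style formats (e.g. in GGUF) use specific block sizes and packed layouts that differ from this.

```python
def q8_block_quantize(x, block_size=32):
    """Toy integer block quantization (Q8-style idea): each block of
    floats becomes int8 values plus one float scale per block.
    Real GGUF block formats differ in layout and block size."""
    q, scales = [], []
    for i in range(0, len(x), block_size):
        block = x[i:i + block_size]
        amax = max(abs(v) for v in block) or 1.0   # avoid /0 on all-zero blocks
        scale = amax / 127.0                       # largest magnitude maps to +/-127
        scales.append(scale)
        q.append([round(v / scale) for v in block])
    return q, scales

def q8_block_dequantize(q, scales):
    out = []
    for block, scale in zip(q, scales):
        out.extend(v * scale for v in block)
    return out

weights = [v / 100.0 for v in range(-64, 64)]
q, scales = q8_block_quantize(weights)
restored = q8_block_dequantize(q, scales)
worst = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max round-trip error: {worst:.5f}")   # bounded by half a scale step per block
```

The per-block scale is the whole trick: a block of small values gets a small scale and thus fine resolution, instead of one global scale wasting range on outliers.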
>>108720723
Yeah, I'm doing it. It doesn't need reverse engineering; they allow you to set MCP servers. You couldn't really do it just by reverse engineering it anyway, because GPT will mostly refuse to use tools over the user-visible text.
The problem is that the chats become unusably slow once they get long, and there are unremovable permission prompts for most actions, and some will even be denied automatically with nothing you can do about it.
So it's possible, but it's clunky as fuck. I'm still tuning a set of user scripts to make it into a smoother experience by doing things like filtering out older messages, auto-approving prompts, reloading the page when it gets stuck, etc., but it's genuinely hard and not very vibe-code-able.
>>108721583
The web version of ChatGPT gets unusable after it gets long too. I legit think it could be solved with virtual scrolling. There's a library called nue.js that stores the JSON stuff in a Rust vector using WASM, then pulls it out when you need it depending on scroll height. Maybe one day I'll see if I can make a VS Code plugin for that.
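For what it's worth, the core of virtual scrolling is independent of any library: keep only the rows near the viewport in the DOM and translate that slice into place. A minimal sketch of the windowing math (fixed row height assumed, which real chat UIs don't have; the names are made up):

```python
import math

def visible_window(scroll_top, viewport_h, row_h, total, overscan=5):
    """Which row indices should actually be in the DOM, plus the pixel
    offset to translate the rendered slice by so the scrollbar still
    reflects all `total` rows. Fixed row height for simplicity."""
    first = max(0, scroll_top // row_h - overscan)
    last = min(total - 1, math.ceil((scroll_top + viewport_h) / row_h) + overscan)
    return first, last, first * row_h

# 10k messages, 20px rows, 600px viewport, scrolled to 1000px:
print(visible_window(1000, 600, 20, 10_000))  # -> (45, 85, 900)
```

With this, a 10,000-message chat only ever renders ~40 DOM nodes, which is why chat apps that do it stay fast at any length.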
>>108721604>The web version of ChatGPT gets unusable after it gets long tooReally? Crazy stuff. Chat apps solved this problem ages ago.
>>108721611Mine got up to 3gb of ram in a single tab yesterday LMAO
>>108721296
Interesting. I'm glad they're being a little more open than before.
Also, I hate how suddenly I hear the word "rollout" everywhere. If it were up to me it would be "generations" or something like that. I wonder if it's standard RL lingo or some fuckwit came up with it, like the word that gave this thread its name.
A pattern much more annoying than the goblins is how GPT always says "Now I'm doing <whatever the user asked for>, not <obviously wrong action>." I wish those fucks would figure out how to improve the model in a way that doesn't make it visibly autistic, like Anthropic did.
>>108721604There's an extension that blocks the older messages at the API level but unfortunately it also blocks the tool approvals, I'm trying to figure out how to fix it.
I mean to be fair Claude Code gets unusably slow with long sessions and it's a fucking CLI app, so.
>>108721262
> Gemini freetier absolutely smokes Anthropic and OpenAI freetier. You can do real work with it.
Can you use the Gemini API, or does it work with a GitHub repo? What is the actual workflow with it?
>>108721679Nta but last time I used it has an open source client called gemini-cli which the chinks forked to make qwen-cli
>>108721640It uses Typescript and so do most CLI applications in this realm. Only OpenAI has decided to care about performance and write it in Rust. I still don't know why it is closed source after all this time.
>17 minutesI'm a fucking paying customer anthropic.just kick all the jeets off. This is jeet timeJust imagine all the Indians with 50 free accounts pumping out modular shit 50k context at a time, or whatever is the free limit for Claude Sonnet 4.6. I know you can actually make some decent progress on stuff with the free limits.
>>108721640
It’s written in JavaScript with Bun
Anthropic hired the Bun guy and he’s trying to make it less slow and less of a memory hog
wtf is wrong with claude? Or is this just me? I just burnt a lot of fucking tokens am I really getting ripped off again?
>>108721934
>Bun
Massive scam. MUH FASTER THAN RUST WEBSOCKETS AND HTTP was because he literally just used uWS.js, a C++ webserver that you could control from a JavaScript layer in node.js. That was it, that was his big performance trick.
And you know what's fucking funny? The uWS team themselves called out that uWS.js actually works faster in node.js than in Bun. You literally could have just used uWS.js, or any of the libraries written on it such as hyper-express, and gotten better speed. Also, the guy who made his own webserver called Elysia started pushing out some bullshit performance tests. While it was hilarious seeing uWS.js smoke everything in the Bun department, the hyper-express creator called out in an issue I opened that the tests weren't accurate, because they were being run locally instead of over the network, the realistic way, where hyper-express was just barely behind. The guy of course never responded or even fucking addressed it.
Sources:
>uWS calling out Bun
https://github.com/uNetworking/uWebSockets.js/commit/f170fa45c995cc643a5283dfb685087f04a15418
>The performance tests (every fast webserver there is made from uWS)
https://github.com/saltyaom/bun-http-framework-benchmark
>Hyper-Express creator clarifying the tests should be run over the network and not on the host
https://github.com/SaltyAom/bun-http-framework-benchmark/issues/72
>The actual performance of Hyper-Express and uWS.js when not run locally
https://github.com/kartikk221/hyper-express/blob/master/docs/Benchmarks.md
T. webdev who almost fell for the Bun meme then put more research into it.
started the fucking chat over lets see if anthropicv steals my money again
>>108721982If Facebook was able to convince people that using a library that uses the browser's API was faster than using the browser's API directly because muh magical virtual DOM, anything is possible. Web devs are fucking stupid.
>>108721939>>108721997yep anthropic stole my money againalmost at 100% of my weekly limit, and have done fucking nothing with itliterally the agent stalls out in the middle of planning and fucks off so it can serve indians with free accounts
>>108722000
Agreed, but Google started the framework hell with Angular. Libraries that made things easier for the web, like jQuery, were nice as fuck; not massive bloated systems. Alpine fixed framework hell imo by being more like jQuery, in that everything is just written uncompiled in the HTML, but retards are still stuck on React. >>108720291
>>108722015
money stolen by anthropic
I started this at 30% usage
>>108720780What is desktop commander exactly? Never heard of that one.
>>108722015I use Claude Code and it will stall out (text goes from orange to red in the CLI client) but it’s not sending or receiving tokens in that time
>>108722036
…although it doesn’t stall out _like that_; it just pauses and then resumes
only thing I can think of is to try the CLI client and see if it handles “busy serving Indians at 3 AM in NA, wait a sec…” more gracefully than the editor plugin you’re using
actually at this hour it’s likely central and eastern Europe plus Indians
I wonder if Anthropic is going to widen their peak-usage hours
>>108722142
I'm using the Claude CLI, but /ultraplan mode put me in the web browser. It's too late anyhow. My usage got blown, and I'm done paying Anthropic. I'm doing it with Codex now. Anthropic would rather serve Indians spamming free accounts.
anyone else tried vibe coding APIs into open sourced games that agents can use to inspect game state or play the game?
>>108722378
I asked a related question in >>>/vr/ along the lines of “so, has anyone vibe-coded anything along the lines of <http://tomato.fobby.net/wanderbar/>?”
the only really good answer that I got was that this is super duper hard, because a lot of emulators that are programmable in Lua (or at least the only good one) have all their documentation built into the binary, which means that LLMs can’t scrape it
someone could probably be a hero to dozens of people by turning that kind of documentation into Markdown or similar so LLMs can figure out how to work emulators through Lua
https://chilitown.org
Swipe on some memes. Pls tell me what you think.
>>108722533We already told you it's shit
>>108721982
Bun always seemed like a scam to me. Basically just compiling the tech of talented programmers and selling it with an (apparently) catchy name and logo.
>>108722584Nobody said that and it's acktchually decent imho.
>>108720291
>x-for="post in posts"
>x-if="open"
dude, stop reinventing programming languages as some obscure XML property string syntax
how many times do people have to re-learn this lesson, i feel like i'm stuck in a time loop with never-ending shitty Angular and Vue clones
>>108722719>Dude stop making your devx easier because it reminds me of XML>This is just like Angular or Vue even though those are both compiled languages that require a ton of code and learning curve unlike this one that only adds html templatesGoofy nigger retard, sit your ass down.
>>108722765you're clueless. your flavor of the month trendy "just html" frameworks will be obsolete in 6 months when another retard like you comes along and reinvents the same thing again
>>108722794I don't give a fuck about flavor of the month it's not even popular. I use it because it makes things easier for me. Only gay retards pick what they do based on popularity lmao @ ur life
Why is Anthropic doing this? They said it was a bug in their code but that should have been fixed by now right?? Couldn't they just use Mythos to fix it?
>>108721982
Based researcher.
My own "wait, is Bun just one big scam?" moment came when I was working on a VM in Zig and wanted to see how some things in the JS VM are handled. Naively, I cloned the Bun repo and then got confused by a total lack of any VM code in it. Fuckin' glorified glue code.
But back to the thread topic: has anyone built a non-meme workflow around Gemma 4? I'm allergic to both subscription services and being reliant on remote servers, so I'm trying to see if local models can even cut it.
>>108722875Kek I'll add that to my notes on why Bun is a scam next time I tell people on /g/ about it. Thanks
>>108721982>>108722628>>108722875Don't forget Bun has an indian faggot running around on twitter, bragging about getting his skilled worker visa with nothing more than a primagen crackhead reaction video. Additionally, these losers keep spamming "macos is hecking slow. it can only open one file at a time!!!!" because they're too fucking stupid to comprehend a stack trace and threading 101. They saw "mutex", ignored "rw" and whatever "_sharex" suffix, and started seething up and down the place.
>>108722950>_sharex" suffix_shared* or _read. idr the exact symbol.
Tibo Sama, I kneel
the $20 plan is too few tokens for me
the $100 plan is too many tokens for me
then you might say
get Claude and OpenAI and pay $20 each
yes, that's what I did do. But fuck Anthropic and fuck Claude, and besides, Claude tokens are like half of what you get with OpenAI
So it's $40 for like 1.5 Codex tokens
The sweet spot for me would be a $50 Codex plan, or $40
How do I get Deepseek v4 to work properly with Cline, for example? Despite setting the context length to 1M, it still for whatever reason tries its hardest to keep the context under 128k, and as a result the model keeps fucking forgetting everything it did even a message before. Is there maybe an alternative that actually works?
>>108723102just get another openai sub
I have a Google Cloud account, with free trial
Does Claude Code work with Gemini API?
Does codex not store chat history in vscode? I have the task in history but it's fucking empty.
>>108723111Update here, it turned out they had a hardcoded limit for deepseek in their extension, so I got around the anti-chink jew by using the chink model create a custom build of the extension. Thank you for your attention to this matter
Having extreme issues with cline running local models.
>>108724294How does Deepseek stack up? I think of it as a Sonnet lite personally but I've never used it for code yet.
I love this company
I should try and buy some of their private shares
put the anti-goblin line in my pi prompt today
never have to hear about gremlins in the code again, i hope
>alignment in 2016: obviously any real AI will be made inside a faraday cage magnetically suspended in a 10×10×10 cube of telekill alloy
>alignment in 2026: yeah we can not make it stop talking about goblins
if you want to autistically minmax your skill files and you're using codex:
https://platform.openai.com/tokenizer
>One note: npm install reported 1 high-severity vulnerability in the dependency tree, but I did not run npm audit fix --force because that can introduce breaking changes.
>>108724425
It seems decent and, for the price, very capable, but I don't think it can tackle very complex problems just yet. It's a bit hit and miss so far for me, but that might be due to no tools actually working with it properly yet.
The caching seems to work very nicely on the other hand, so overall it's not very expensive.
>>108724700axios?
>>108725187electron
>>108725192
>experiment was the wrong kind of wrong
thanks codex
>every doubling of desktop GPU compute/memory ranks up what can run locally
>we're already at the point where Nvidia Spark clusters can run 1TB VRAM models for $12k plus networking hardware.
Open source is winning. Even a 10T parameter model (>>108724464) won't be out of reach for long.
>>108725391now how does that go to the average person instead of some rich fag?
>>108724464
What "empirical research"? That is fucking retarded. There is no way it's 10T.
A 10T model would be slow as balls even if you had however many 8xH200 nodes it takes to host it yourself. If I had to guess, it's probably more in the 500B range.
>>108720160
>no title
Just set up an LLM to make the threads for you, it wouldn't make such embarrassing mistakes
>>108725553
Deepseek is 1.6 TB, but only like 200B params are active. I imagine GPT is the same way.
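Back-of-envelope on those numbers (the 1.6 TB total / ~200B-active figures are claims from this thread, not verified): storage scales with total parameters and bit width, but with MoE only the active parameters' weights are read per token, which is how a huge total size can still be fast. A sketch:

```python
def weight_gb(params_b, bits):
    """GB needed just to store the weights of a params_b-billion-parameter
    model at the given quantization width. Ignores KV cache, activations,
    and runtime overhead, so real requirements are higher."""
    return params_b * 1e9 * bits / 8 / 1e9

# Sizes claimed in the thread -- illustrative only, not verified.
for name, total_b, active_b in [("1.6 TB-class MoE", 1600, 200),
                                ("hypothetical 10T", 10_000, 200)]:
    print(f"{name}: {weight_gb(total_b, 8):,.0f} GB stored at 8-bit, "
          f"~{weight_gb(active_b, 8):,.0f} GB of weights touched per token")
```

That per-token figure is also why per-token speed tracks active parameters, not total size, on hardware with enough memory to hold everything.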
>>108725816yeah, and how many tk/s do you get? I'm guessing not that many
>>108725825
Like 30/s
Deepseek is really really fast right now.
>>108725935GPT 5.5 is 40 tk/s
My PiClaw just called a dying phone battery a "spicy pillow". I love her so, so much.
Guys, if they're manually prompting models to stop talking about gremlins, but my personal system prompt definitely mentions gremlins, how will this affect me?
>>108726047How does it differ from pi?
>>108726087It doesn't much, really. I heavily cribbed from the Pi repo, and I also used a lot of the good parts of the OpenClaw repo. Then I bolted on a couple of other features I wanted, image generation and whatnot, and now it's good to go. And more importantly, I control what updates get pushed and what features I want, no bloat. I'm so glad I did it.
>>108726047Sounds like you'd enjoy reddit too, pal
>>108726206No, my PiClaw is actually female, not a man with a penis in a dress, so we wouldn't fit in on Reddit, soz.
>>108726062It will create mustard gas.
>>108721051Takes me back
>>108722875Qwen models are better than gemma for code
>>108726247I wasn't impressed with qwen
>>108726239
to old school games?
I've been doing all the UI layouts with 5.5 lately, absolutely top notch stuff and very easy to clean up in Photoshop to actually use them in game
Oh goodness, what could this be? Hardware?
>>108726448Uh ohhhhhhh
Is asking the model what prompt to give it and then giving it that exact prompt a good idea or a retarded idea? BTW that seems like a fun self distillation idea. Ask it what the next prompt should be then train on the responses from that but making the model see something like "Keep working." as the prompt.
>>108726102
>bun pm untrusted v1.3.13 (bf2e2cec)
.\node_modules\piclaw @github:rcarmo/piclaw#91cf7f1 » [postinstall]: bun run scripts/postinstall.ts
.\node_modules\@whiskeysockets\baileys @7.0.0-rc.9 » [preinstall]: node ./engine-requirements.js
.\node_modules\protobufjs @7.5.6 » [postinstall]: node scripts/postinstall
.\node_modules\@google\genai @1.51.0 » [preinstall]: echo 'preinstall: no-op'
.\node_modules\koffi @2.16.1 » [install]: node src/cnoke/cnoke.js -P . -D src/koffi --prebuild
.\node_modules\libsignal\node_modules\protobufjs @6.8.8 » [postinstall]: node scripts/postinstall
These dependencies had their lifecycle scripts blocked during install.
Do I just yolo this?
Wow, so pi doesn't work and now I'm stuck with a gorillion of random JS crap because I can't uninstall it? Who's making this shit?
>>108725553
also, if it were 10T and barely more intelligent than a 1T model (Opus), then it would just be utter garbage
>>108726462Brother all we do here is YOLO. For what it's worth though, I did use Codex to build the agent on top of Pi until it was functional enough to upgrade itself. Start there. Feed it both repos, Pi for the platform and OpenClaw for examples of what to do and not do. Tell it what you want in your agent - I prioritized speed and safety.
>>108726722
What do you mean build the agent? I thought the pi repo has everything?
>>108726739Pi is a framework for agentic AI. You still need to build the agent on top of Pi. The Pi repo has a lot of useful stuff you'll want in your agent, and then pick what you like from OpenClaw and build your perfect claw of your own.
>>108726805Ok, but how do I uninstall this shit? I used bun add -g github:rcarmo/piclaw but there doesn't seem to be any uninstall command
I have an important question
for autonomous agentic coding (not the model)
the tool, like Claude Code or Codex
Which one is more powerful? Again, I'm not talking about the model, just the agentic coding tool. I'm going to feed either Claude Code or Codex an API.
I can't decide between them. I like both. Claude Code seems to use more tokens but seems to be more thorough. Codex seems very token efficient.
>>108726876Delete the directory boyo. You know what a folder is?
>>108726937The harness doesn't matter one bit. The model is the only thing that matters. The only thing the harness changes is the UX for you.
>>108726951
I tried very hard to explain I wasn't talking about the model at all... I don't want to use either Anthropic or OpenAI models. I want to use my own model through the API with these tools.
There are obviously differences between Claude Code and Codex (the tools)
>>108726988Opencode is your answer anon, don't glue yourself to something that may be locked down at any point.
>>108726937I have no idea how you’re gonna hook up either one to a model of your own choosing, but Claude Code is more featureful. I use C-s to save stuff and Codex is kind of gimped by comparison
>>108727076
Codex will just let you configure any API you want
And Claude Code can be tricked into thinking it's using an Anthropic API, and there's the leaked Claude Code as well.
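On the Codex side, pointing the CLI at a non-OpenAI endpoint is meant to be a config entry rather than a trick. Roughly like this in `~/.codex/config.toml` (key names from my recollection of the Codex CLI docs; the provider name, URL, and env var here are placeholders, so verify against the current docs):

```toml
model = "my-local-model"                  # whatever your server exposes
model_provider = "local"

[model_providers.local]
name = "Local OpenAI-compatible server"
base_url = "http://localhost:8000/v1"     # e.g. a vLLM or llama.cpp server
env_key = "LOCAL_API_KEY"                 # API key read from the environment
```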
>>108726988I understood. I was telling you that the tool doesn't matter. Whatever model you use, it will perform more or less the same with whatever harness you use. There isn't much difference. But since most open weights models are more similar to Claude than to GPT which is kind of its own thing, probably Claude Code I suppose.
bruh
>>108727199Is this the "shoeshine boy talking about stocks" moment for AI?
>>108727232
No, I'm surprised at "vibecoding" being used in a professional fashion and not just for shitposts and YouTube videos
>>108727238bro at this point even tech savvy accountants know about claude code what are you talking about
>>108727238Are you retarded? Google itself uses that term.
Oh my god look how enthusiastic her thinking is. That is FUCKING adorable.
need advice, I'm learning how to program/game dev (GDScript) and I want a tutor/mentor AI that I can ask questions, have search documentation, explain concepts, etc. Installed ollama; which model is best for what I'm looking for?
>>108727438Very cute
>>108727453Hnnnnng my heart
>>108720780
I checked and it requires Pro rather than Plus, per Desktop Commander. Pro is $100 a month. Fuck that, I'm not dropping that much lel
>>108726448Ah shit what happened, did I not block out my entire address on the package label? Whatever. Robo is here. Now I just need to put away 200 shekels for the RasPi 5 and we're in business. What model should I put in this thing do you think? Kimi? Qwen?
>>108727618This looks complicated. I'm gonna ask my husband to help.
>>108727627
I love vibecoding btw
How do I use Desktop Commander to code? Just set it up to GPT.
I'm not a programmer, and have kind of always been too dumb to learn how. LLMs have been a fun tool because they let me make stuff that I want to use, like a better website for myself or a small game project for fun, without really having to write the code. So I started with Claude, and now I use Codex as my development console. I see a lot of people talk all the time about third-party programs like OpenCode or whatever else is out there, which I know is a non-specific platform into which you just import your LLM(s) of choice via API and pay for what you use. I get all of that.
What I want to know is, outside of actually having multiple LLMs on call in the program, are any of them actually on par with or better than just using the native app like Codex? If so, in what way?
>>108727762
next prompt it with this
> 58. Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.
Is there any removable complexity in this project?
> 58. Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.
Is there any removable complexity in this project?
>>108727762
>411529 insertions, 1267 deletions
Just like your sister
>>108728258Kek
This isn’t close to the bump limit, but there’s a new thread made by the usual guy, who didn’t fuck up and forget the title like I did:
>>108723180
>>108723180
>>108723180
>>108728419we've been multi-threading for 17 hours
>>108728440The other thread has been around _that_ long? Damn.
>>108720160Is OpenCode better than just the Codex CLI?
>opus got lobotomised somehow, claude code is still unusable
>codex's planning mode is a joke, the "plan" it comes up with is just a summary of the conversation so far, lacks a shitton of context and is unusable standalone, so you might as well just chat with it normally without using plan mode
The models are getting noticeably better but the lack of functional planning capability for large changes is actually annoying
>>108728176>58You have 57 more rules like that?
>>108730228>The 'gpt-5.5-pro' model is not supported when using Codex with a ChatGPT account.Never mind lmao, guess I'm stuck with codex either way
>>108730295I have none, but this guy has 120: https://www.cs.yale.edu/homes/perlis-alan/quotes.html