A general for vibe coding, coding agents, AI IDEs, browser builders, MCP, and shipping prototypes with LLMs.

►What is vibe coding?
https://x.com/karpathy/status/1886192184808149383
https://simonwillison.net/2025/Mar/19/vibe-coding/
https://simonwillison.net/2025/Mar/11/using-llms-for-code/

►Prompting / context / skills
https://docs.cline.bot/customization/cline-rules
https://docs.replit.com/tutorials/agent-skills
https://docs.github.com/en/copilot/tutorials/spark/prompt-tips

►Editors / terminal agents / coding agents
https://cursor.com/docs
https://docs.windsurf.com/getstarted/overview
https://code.claude.com/docs/en/overview
https://aider.chat/docs/
https://docs.cline.bot/home
https://docs.roocode.com/
https://geminicli.com/docs/
https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent

►Browser builders / hosted vibe tools
https://bolt.new/
https://support.bolt.new/
https://docs.lovable.dev/introduction/welcome
https://replit.com/
https://firebase.google.com/docs/studio
https://docs.github.com/en/copilot/tutorials/spark
https://v0.app/docs/faqs

►Open / local / self-hosted
https://github.com/OpenHands/OpenHands
https://github.com/QwenLM/qwen-code
https://github.com/QwenLM/Qwen3-Coder

►MCP / infra / deployment
https://modelcontextprotocol.io/docs/getting-started/intro
https://modelcontextprotocol.io/examples
https://vercel.com/docs

►Benchmarks / rankings
https://aider.chat/docs/leaderboards/
https://www.swebench.com/
https://swe-bench-live.github.io/
https://livecodebench.github.io/
https://livecodebench.github.io/gso.html
https://www.tbench.ai/leaderboard/terminal-bench/2.0
https://openrouter.ai/rankings
https://openrouter.ai/collections/programming

►Previous thread
>>108504430
so is gemma 4 better than gpt 5.3?
I use AI and I still look like the guy on the left.
>>108526147
proof?
>>108525828
You probably can if you're smart enough, but making a virus that kills humans extremely efficiently is probably orders of magnitude easier than making a virus that kills CPUs.
>>108526069
ya, if not then skill issue
>>108526043
I sell blood to the hospital from time to time. Also my grandparents gift me money on birthdays.
i wonder how the models got rlvr'ed into writing monolithic files
Is there any good reason to use claude code over claude desktop with mcp servers?
>>108526499
good question
>>108526532
dunno, never used claude desktop but probably not much of a difference
https://github.com/anomalyco/opencode/
Out of curiosity--and perhaps ignorance--why don't people use OpenCode? I've been using it with omnicoder:9b for planning and Qwen3 Coder A3B for coding. I've tried some others like aider, cursor, codex, and claude code and I've gotten the best experience from OpenCode--well, minus their absolute shit need to manually configure models.
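For anyone scared off by the manual model config: a rough sketch of what a custom-provider entry in `opencode.json` looks like for a local OpenAI-compatible endpoint. The schema here is from memory of the opencode docs and may have drifted between versions, and the model tag is just a placeholder, so check their config reference before copying:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3-coder:30b": {
          "name": "Qwen3 Coder (local)"
        }
      }
    }
  }
}
```

Once that's in place the model shows up in the model picker like any hosted one.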
>>108526968
i was going to say plenty of mentions of opencode in threads + it's in the op, but for some reason it isn't in the op. lol codex isn't in the op either and it's the most popular thing in the threads.
They're not quite Blender-level planet surfaces but it generates near instantly so I'm happy with it for now. Gonna use these visual profiles to generate simple 2D icons to replace those shitty placeholder spheres I've had since the beginning. And find a way to render the spheres without turning my phone into a nuclear reactor.
>>108526968
I'm using it. The harness seems really good but the desktop version is still in beta and it's annoying to do some things that aren't fully implemented or documented yet, and it seems really dumb that you can't edit files in the viewer if you want; maybe that's planned.
They cut off claude for openclaw. You can't use openclaw with it anymore. What do we do bros?
it's very tempting to just spam "what are the next steps" and "continue on next steps" without looking at what it's actually doing at this point
they love to add lines of code, i think i'll need to refactor soon and strip out the parts that don't add much value
>>108527052
>>108527371
That's awesome to hear! What are y'all using for backends? I'm still on Ollama but I want to give vLLM a try for the Ray support--wanna give a shot at splitting the kv cache onto a different machine's GPU.
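For the Ray experiment: vLLM's multi-node mode shards the whole model (weights and KV cache together) across the GPUs in a Ray cluster via tensor/pipeline parallelism, rather than offloading just the cache. A launch sketch, assuming both boxes run the same Python and vLLM versions and can reach each other; the model id is a placeholder and flag names can shift between releases, so verify against `vllm serve --help`:

```shell
# head node: start the Ray cluster
ray start --head --port=6379

# worker node: join it
ray start --address=<head-ip>:6379

# head node: shard the model across both machines' GPUs
vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct \
  --tensor-parallel-size 2 \
  --distributed-executor-backend ray
```

Note the usual advice is tensor parallel within a node and `--pipeline-parallel-size` across nodes, since cross-node TP is very sensitive to interconnect bandwidth.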
>>108527502
They're mostly doing this so people move to Cowork, right?
>>108527502
Pay what you owe or be left behind.
>>108527547
i'm an api fag, paying the altman toll
>>108527550
Should I move on to cowork because it's better than open source stuff?
>>108527547
Not much I can run in ollama with a 1050 ti
>>108527547
glm-5 on nvidia nim, slow but surprisingly capable
it has some kind of whitespace compression issue that makes python unworkable but it's fine in languages where it only needs to get the brackets in the right place and doesn't have to fight with indentation levels
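The indentation point is easy to demonstrate: collapse the leading whitespace and Python's grammar falls apart, whereas brace-delimited code still carries its structure in the brackets. A small self-contained check (nothing GLM-specific here, just a simulation of lossy whitespace compression):

```python
# A valid Python snippet, then the same snippet with leading
# whitespace stripped (simulating whitespace compression).
src = "def f(x):\n    if x:\n        return 1\n    return 0\n"
collapsed = "\n".join(line.lstrip() for line in src.splitlines()) + "\n"

compile(src, "<src>", "exec")  # fine: block structure lives in the indentation

try:
    compile(collapsed, "<collapsed>", "exec")
    broke = False
except IndentationError:
    broke = True  # "expected an indented block" -- the program is unrecoverable

print(broke)  # True
```

The same lstrip pass on a C or JS snippet would leave something the parser still accepts, which matches the "only needs to get the brackets in the right place" observation.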
>>108527550
They want vertical integration. They want whatever moat they can get from people being locked in to their proprietary client side software.
The reason of "well umm they put too much load on our servers" might be a distant secondary reason but the primary reason is 100% that they want lock-in. They banned opencode as well and there is no way the tiny % of users using opencode are putting that much load on their servers from slightly worse caching or using opus instead of sonnet for agents.
>>108527563
>>108527583
Fair enough, we use what's best. However I will ask: why not try to save ~$200 and get an RTX 3060 12GB? It is/was a budget sweet spot for me personally. Threw them in an old HP Z620--def not the fastest CPU but, again, budget.
>>108527598
Any advantages of nim? I don't know much about it, it gave me a more "professional" vibe than I was looking for when I started with Ollama. I suspect it's capable of some pretty nifty stuff with it being from nvidia 'nd all. GLM-5 is still a bit much for my setup, but can you "lazy prompt" it: "the app needs to be more efficient, make a plan", "execute your plan and suggest next steps"?
>>108527700
tbh i do want to run local eventually, but i'd rather wait a year or two before building a machine for it.
hopefully the memory crunch won't be so fucked then and the local models will offer better performance than today's frontier ones. plus i'm only doing like $20 a month rn (might go up to 40) so it's not crazy.