/g/ - Technology





LLMs work OK with Tailwind edition

A general for vibe coding, coding agents, AI IDEs, browser builders, MCP, and shipping prototypes with LLMs.

►What is vibe coding?
https://x.com/karpathy/status/1886192184808149383
https://simonwillison.net/2025/Mar/19/vibe-coding/
https://simonwillison.net/2025/Mar/11/using-llms-for-code/

►Prompting / context / skills
https://docs.cline.bot/customization/cline-rules
https://docs.replit.com/tutorials/agent-skills
https://docs.github.com/en/copilot/tutorials/spark/prompt-tips

►Editors / terminal agents / coding agents
https://opencode.ai/
https://cursor.com/docs
https://docs.windsurf.com/getstarted/overview
https://code.claude.com/docs/en/overview
https://aider.chat/docs/
https://docs.cline.bot/home
https://docs.roocode.com/
https://geminicli.com/docs/
https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent

►Browser builders / hosted vibe tools
https://bolt.new/
https://support.bolt.new/
https://docs.lovable.dev/introduction/welcome
https://replit.com/
https://firebase.google.com/docs/studio
https://docs.github.com/en/copilot/tutorials/spark
https://v0.app/docs/faqs

►Open / local / self-hosted
https://github.com/OpenHands/OpenHands
https://github.com/QwenLM/Qwen3-Coder
https://huggingface.co/bartowski/Qwen_Qwen3.6-35B-A3B-GGUF

►MCP / infra / deployment
https://modelcontextprotocol.io/docs/getting-started/intro
https://modelcontextprotocol.io/examples
https://vercel.com/docs
https://mcp.desktopcommander.app/

►Benchmarks / rankings
https://aider.chat/docs/leaderboards/
https://www.swebench.com/
https://swe-bench-live.github.io/
https://livecodebench.github.io/
https://www.tbench.ai/leaderboard/terminal-bench/2.0

►UI/Frontend
Figma Make
Lovable
Claude design
https://uiverse.io/
https://ui-ux-pro-max-skill.nextlevelbuilder.io/
https://stitch.withgoogle.com/
https://gamma.app/
https://github.com/nextlevelbuilder/ui-ux-pro-max-skill

►Previous thread
>>108706320
>>
File: file.png (75 KB, 750x444)
75 KB PNG
technology
>>
>>108720179
looks cool but what is it trying to do? hack windows XP? Or improve WINE?
>>
Tailwind is even worse than React in terms of retardation. I would say retardation peaked with Tailwind.
>>
>>108720160
>but bro css is impossible to understand
>>
>>108720203
Yes, people say this with a straight face and then proceed to write tailwind slop that is unintelligible.
>>
>>108720183
i wanted to see if it can add SMB2 support to windows 98
>>
>>108720160
Stop evading my filters, fuckface
>>
>>108720160
VibeBUMP
>>
File: aahaha.jpg (206 KB, 570x668)
206 KB JPG
>>108720268
>>
File: file.png (72 KB, 823x575)
72 KB PNG
>>108720191
Alpine.js + Bulma is where it's at. Alpine is built atop the new web components, you write your reactive stuff in the HTML itself, no compiling. Just straight HTML with extra reactive syntax sugar. Bulma is the opposite of tailwind, it's pure CSS.

I wrote my entire social media site in it.
>>
>>108720268
I’m not the usual guy, sorry
>>
>>108720291
For me where it's at is my framework: no build, no npm, and SSR-compatible, but powerful enough to build large SPAs if needed.
>>
retard op forgor the subject edition
>>
>>108720305
ah, shit. I remembered it the first time but adding in a tiny bit about the picture pushed me over the limit and I had to recreate it and forgot the subject the second time around
>>
>>108720304
Based. That's how Alpine is. Just copy the CDN, no build or NPM needed. Wrote my social media in it.

It works SSR too, I heard it's popular with Astro. I've just never tried it personally.
>>
>>108720315
Just delete the thread while you still can and make a new one.
>>
>>108720321
Yeah, I sort of looked at Alpine, it's relatively barebones compared to mine. Don't want to brag, I already posted the link in some other threads, but mine is very comfy and full-featured.
>>
>>108720329
no, it’s hopping already
I’ll leave this as a monument to all my sins
>>
>>108720351
You know what you must do. Spam Nikocado's asshole until you get purged and banned so the thread dies, and you shamefully with it.
>>
File: 1712428465706393.png (83 KB, 500x500)
83 KB PNG
>ask 5.4 for a verbose todo list
>17k characters
>>
I don't get it. I have to pay monthly sub but still get limited use? What's the point then?
>>
>>108720523
It is cheaper for you than using APIs
>>
>>108720608
but sub is api?
>>
File: 15772620418967.jpg (182 KB, 1366x768)
182 KB JPG
URGENT HELP NEEDED

This is the only place I can ask this question without getting screamed at.

For planning and designing, which is better: GPT 5.5 High or VH, or Opus 4.7 High or VH?

Code obviously.
>>
>>108720160
nice subject op
>>
>>108720617
well yeah, it of course uses some API, but the other option is to use the provider's API that is billed by the number of tokens in and out. I believe you know what I mean.
>>
>>108720618
5.5 and it's not even close
>>
Guys, I have a vibe coding step in my interview process, and no, this isn't a joke, I'm meant to be able to make Claude fix a bug in a simulated prod environment of the company. I used a bit of Copilot, but never tried real agentic vibe coding. Any tips? Like, can I give a sequence of commands to increase the odds of it doing it properly?
>>
I don't suppose there's a hacky way to rig the ChatGPT web subscription into Codex to use ChatGPT as my IDE agent?
>>
>>108720696
whats a chatgpt web subscription?
>>
>>108720696
you mean to avoid hitting rate limits? I think I saw an anon here using some mcp server to do that
>>
>>108720703
ChatGPT+ on the website chatgpt.com so basically the thing most people use. It has no rate limits.

>>108720708
Yeah, that's exactly why. I hit a rate limit when using Claude and seethed and started doing it in the web, and the web gives me unlimited everything but I have to copy paste a ton.

So someone has done it and it's not just a crazy shot in the dark? I assumed it would be reverse-engineerable.
>>
>>108720696
Pretty much desktop commander.

Note that you're "unlimited" until OpenAI feels like you're abusing things.

I just have the web ChatGPT do my planning on max think, as well as have it spit out optimized prompts for codex to follow. It has definitely lessened the amount of usage I see against my quota, as I can now just coast on 5.5 low because all the heavy thinking has been done. Might go from Plus to Pro for a month, though, just to burn 5.5 med/high for lulz while they have the 10x going
>>
Asking again, does anyone here use local LLMs alongside an agent client like Cursor?
>>
>>108720861
no
>>
>>108720901
why not
>>
>>108720918
why yes
>>
>>108720861
I do, but for very limited stuff like proprietary code. I personally use Goose and hook it up to Qwen 3.6 35B. It's workable but I dunno, I feel like I have to herd it a lot; it's not a model you can type something into and have it figure stuff out for you. Bigger models would be better, but I am still waiting for money to fall out of the sky so I can afford an Nvidia blade server that can fry my home electricity grid to host Deepseek v4 locally. Until models improve, I would say they are semi-autonomous.
>>
>>108720618
GPT5.5. Opus 4.7 is better at really autistic debugging but GPT5.5 is a better architect and asks better questions in the design phase.
>>
>>108720942
>Opus 4.7 is better at really autistic debugging
can you tell us more? im curious why you think that (like actually)
>>
>>108720861
Been working on implementing this crucial feature into my app with codex. I've wasted a few days worth of tokens and I'm finally starting to get where I want it to be. For a while I was just talking to codex but I started working with GPT 5-5 in thinking mode in the webapp along with Gemma4 26B-it (mostly just to confirm findings) and I'm liking this workflow. Am I really getting anything of value from Gemma alongside ChatGPT in the webapp? Probably not, but it's not hurting.
>>
>>108720670
>Any tips? Like, can I give a sequence of commands to increase the odds of it doing it properly?
I dunno, the procedure is just
>locate the crash [paste stack trace]
or if you don't have a stack trace
>bug: [you describe some symptoms]
and run /simplify before committing
by the second time you've learned all there is to know about debugging
>>
File: HFbJG95X0AALGe0.jpg (824 KB, 2926x4096)
824 KB JPG
>>108720861
No but only because I'm working on building the ACP server I want to use for that and I only have potato GPUs for the moment. Once my software is ready and I can play with gemma4 ggufs it's on.
>>
>>108720949
Experience. I'm a cheap bastard and don't want to pay for MAX5 so I'm juggling the Codex and Claude 20bux plans and free Gemini, on 10kLoC-magnitude C and Rust programs. These are big enough source trees that anything much short of frontier craps itself and starts hallucinating when given a 1-4 sentence bug description and told to find it.
>>
>>108720975
arent you an ai engineer?
>>
>>108720923
no token limit, free other than cost of hardware and energy, easily runs on 16gb+ GPUs many of us already own
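A back-of-envelope for the 16 GB claim (a sketch only: bits-per-weight depends on the quant you pick, and KV cache plus runtime overhead are ignored):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight footprint in GB: parameters * bits per weight / 8.
    Ignores KV cache and runtime overhead, which add a few GB on top."""
    return params_billion * bits_per_weight / 8
```

For example, a 35B model at 4-bit works out to about 17.5 GB of weights alone, so a 16 GB card needs some offloading while ~24 GB runs it comfortably.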
>>
>16gb+ GPUs many of us already own
>>
File: 2mainmenu.webm (3.04 MB, 1920x1008)
3.04 MB WEBM
somebody posted a link to some guy who was generating maps from a base image and it gave me an idea for the main menu of my game
>>
Fucking Claude just burnt my whole fucking token limit in ultraplan mode reading my project before it even started planning.
>>
>>108720993
Yes and my employer is a penny pinching halfjeet who won't pay for Claude Max or Codex Pro.
>>
>>108720975
how do you get Gemini free trial?
>>
>>108721241
It comes free with your Gmail account. Gemini free tier absolutely smokes Anthropic and OpenAI free tiers. You can do real work with it.
>>
https://openai.com/index/where-the-goblins-came-from/
>>
>>108720956
Gemmy just found the cause of a bug before GPT5.5 did. I'm actually impressed. It's super fast too. Sometimes I worry it didn't even look at my codebase because it responds so quickly.
>>
>>108720942
Interesting. I use both and I tend to think of GPT as the model with the better ’tism, but maybe that’s because it’s better at finding the right problem to solve
>>
>>108721345
It's a 26B/A4B model with less latency, I'd be shocked if it was running slower than GPT5.5. Now less ACCURATE or less CAPABLE is another story.
>>
>>108720949
In my experience Opus is better at running lesser known software, including your own software. Both agents can use grep and so on about equally well, but Codex gets confused running my programs, even misquotes shell commands from time to time. Both usually get it done, but with Claude the iteration cycles are shorter.
>>
>>108713079
You are talking about linear algebra on a thread about using LLMs, and you don't know what quantization is?
There are two types of quantization, floating point (like fp8) and integer based (like Q8). Integer based quantization scales integer blocks by a float to represent different ranges with more precision.
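To make the integer scheme concrete, a hedged pure-Python sketch (real Q8 formats pack the ints and store scales in half precision, and fancier variants add per-block minima; this only shows the scale-per-block idea):

```python
def quantize_q8(values, block=32):
    """Block-wise int8-style quantization: each block of floats shares one
    float scale, chosen so the block's largest magnitude maps to 127."""
    quants, scales = [], []
    for i in range(0, len(values), block):
        blk = values[i:i + block]
        amax = max(abs(v) for v in blk)
        scale = amax / 127.0 if amax else 1.0  # avoid divide-by-zero on all-zero blocks
        scales.append(scale)
        quants.extend(round(v / scale) for v in blk)
    return quants, scales


def dequantize_q8(quants, scales, block=32):
    """Reverse: multiply each int back by its block's scale."""
    return [q * scales[i // block] for i, q in enumerate(quants)]
```

Worst-case error per value is half a scale step, which is why bigger blocks or outlier-heavy weights hurt precision.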

>>108713154
Fuck you.
>>
>>108720723
Yeah I'm doing it. It doesn't need reverse engineering, they allow you to set MCP servers. You couldn't really do it just by reverse engineering it anyway, because GPT will mostly refuse to use tools over the user-visible text.
The problem is that chats become unusably slow once they get long, and there are unremovable permission prompts for most actions, and some will even be denied automatically and there's nothing you can do about it.
So it's possible but it's clunky as fuck. I'm still tuning a set of user scripts to make it a smoother experience, doing things like filtering out older messages, auto-approving prompts, and reloading the page when it gets stuck, but it's genuinely hard and not very vibe-code-able.
>>
>>108721583
The web version of ChatGPT gets unusable after it gets long too. I legit think it could just be solved with virtual scroll. There's a library called nue.js that stores JSON stuff in a rust vector using WASM then pulls it out when you need it depending on scroll height. Maybe one day I'll see if I can make a plugin for VS Code for that.
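For what it's worth, the windowing arithmetic at the heart of virtual scroll is tiny. A sketch assuming fixed row height (`overscan` is a made-up parameter name here for the extra rows rendered outside the viewport to hide pop-in):

```python
def visible_range(scroll_top: int, viewport_h: int, item_h: int,
                  n_items: int, overscan: int = 3) -> tuple:
    """Return the [first, last) slice of rows worth rendering: only rows
    intersecting the viewport, padded by `overscan` rows on each side.
    Everything outside the slice stays as plain data, not DOM nodes."""
    first = max(0, scroll_top // item_h - overscan)
    last = min(n_items, (scroll_top + viewport_h) // item_h + 1 + overscan)
    return first, last
```

The renderer then absolutely positions just that slice inside a spacer sized for all `n_items`, so a 10,000-message chat only ever materializes a few dozen nodes.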
>>
>>108721604
>The web version of ChatGPT gets unusable after it gets long too
Really? Crazy stuff. Chat apps solved this problem ages ago.
>>
>>108721611
Mine got up to 3gb of ram in a single tab yesterday LMAO
>>
>>108721296
Interesting. I'm glad they're being a little more open than before.
Also I hate how suddenly I hear the word "rollout" everywhere. If it were up to me it would be "generations" or something like that. I wonder if it's standard RL lingo or some fuckwit came up with it, like the word that gave this thread its name.
A pattern much more annoying than the goblins is how GPT always says "Now I'm doing <whatever the user asked for>, not <obviously wrong action>." I wish those fucks would figure out how to improve the model in a way that doesn't make it visibly autistic, like Anthropic did.
>>
>>108721604
There's an extension that blocks the older messages at the API level but unfortunately it also blocks the tool approvals, I'm trying to figure out how to fix it.
>>
I mean to be fair Claude Code gets unusably slow with long sessions and it's a fucking CLI app, so.
>>
>>108721262
> Gemini freetier absolutely smokes Anthropic and OpenAI freetier. You can do real work with it.
Can you use the Gemini API, or it works with a Github repo? What is the actual workflow with it?
>>
>>108721679
Nta but last time I used it, it had an open-source client called gemini-cli, which the chinks forked to make qwen-cli
>>
>>108721640
It uses Typescript and so do most CLI applications in this realm. Only OpenAI has decided to care about performance and write it in Rust. I still don't know why it is closed source after all this time.
>>
File: Untitled55.png (8 KB, 682x157)
8 KB PNG
>17 minutes
I'm a fucking paying customer anthropic.

just kick all the jeets off. This is jeet time

Just imagine all the Indians with 50 free accounts pumping out modular shit 50k context at a time, or whatever is the free limit for Claude Sonnet 4.6. I know you can actually make some decent progress on stuff with the free limits.
>>
>>108721640
It’s written in JavaScript with Bun
Anthropic hired the Bun guy and he's trying to make it less slow and less of a memory hog
>>
File: Untitled55.png (9 KB, 682x157)
9 KB PNG
wtf is wrong with claude? Or is this just me? I just burnt a lot of fucking tokens am I really getting ripped off again?
>>
File: file.png (143 KB, 888x623)
143 KB PNG
>>108721934
>Bun
Massive scam. MUH FASTER THAN RUST WEBSOCKETS AND HTTP was because he literally just used uws.js, a C++ webserver you can control from node.js through a JavaScript layer. That was it, that was his big performance trick.

And you know what's fucking funny? The uws team themselves called out that uws.js actually works faster in node.js than in bun. You literally could have just used uws.js or any of the libraries written in it such as hyper-express and got better speed.

Also some tranny that made his own webserver called Elysian started pushing out some bullshit performance tests. While it was hilarious seeing uws.js smoke everything in the bun department, the hyper-express creator called out in an issue I opened that the tests weren't accurate because they were being run locally instead of over the network, the realistic way, where hyper-express was just barely behind. The tranny of course didn't respond or ever even fucking address it.

Sources:
>UWS calling out Bun
https://github.com/uNetworking/uWebSockets.js/commit/f170fa45c995cc643a5283dfb685087f04a15418
>The tranny's performance test (every fast webserver there is made from uws)
https://github.com/saltyaom/bun-http-framework-benchmark
>Hyper-Express creator clarifying the tests should be run over the network and not on the host
https://github.com/SaltyAom/bun-http-framework-benchmark/issues/72
>The actual performance of Hyper-Express and uws.js when not run locally
https://github.com/kartikk221/hyper-express/blob/master/docs/Benchmarks.md


T. webdev who almost fell for the bun meme then put more research into it.
>>
started the fucking chat over, let's see if anthropic steals my money again
>>
>>108721982
If Facebook was able to convince people that using a library that uses the browser's API was faster than using the browser's API directly because muh magical virtual DOM, anything is possible. Web devs are fucking stupid.
>>
File: Untitled55.png (22 KB, 682x157)
22 KB PNG
>>108721939
>>108721997
yep anthropic stole my money again

almost at 100% of my weekly limit, and have done fucking nothing with it
literally the agent stalls out in the middle of planning and fucks off so it can serve indians with free accounts
>>
>>108722000
Agreed, but Google started the framework hell with Angular. Libraries that made things easier for the web, like jQuery, were nice as fuck; not massive bloated systems. Alpine fixed framework hell imo by being more like jQuery, in that everything is just written uncompiled in the HTML, but retards are still stuck on React. >>108720291
>>
File: Untitled55.png (25 KB, 666x333)
25 KB PNG
>>108722015
>>
File: Untitled55.png (43 KB, 643x489)
43 KB PNG
money stolen by anthropic
>>
File: Untitled55.png (15 KB, 745x184)
15 KB PNG
I started this at 30% usage
>>
>>108720780
What is desktop commander exactly? Never heard of that one.
>>
>>108722015
I use Claude Code and it will stall out (text goes from orange to red in the CLI client) but it’s not sending or receiving tokens in that time
>>
>>108722036
…although it doesn’t stall out _like that_; it just pauses and then resumes
only thing I can think of is to try the CLI client and see if it handles “busy serving Indians at 3 AM in NA, wait a sec…” more gracefully than the editor plugin you’re using
>>
actually at this hour it’s likely central and eastern Europe plus Indians
I wonder if Anthropic is going to widen their peak-usage hours
>>
>>108722142
I'm using the Claude CLI, but /ultraplan mode put me in the web browser. It's too late anyhow. My usage got blown, and I'm done paying Anthropic. I'm doing it with Codex now. Anthropic would rather serve Indians spamming free accounts.
>>
anyone else tried vibe coding APIs into open sourced games that agents can use to inspect game state or play the game?
>>
>>108722378
I asked a related question in >>>/vr/ along the lines of “so, has anyone vibe-coded anything along the lines of <http://tomato.fobby.net/wanderbar/>?”
the only really good answer I got was that this is super duper hard because a lot of emulators (or at least the only good one) that are programmable in Lua have all their documentation built into the binary, which means LLMs can't scrape it
someone could probably be a hero to dozens of people by turning that kind of documentation into Markdown or similar so LLMs can figure out how to work emulators through Lua
>>
https://chilitown.org
Swipe on some memes. Pls tell me what you think.
>>
>>108722533
We already told you it's shit
>>
>>108721982
Bun always seemed like a scam to me. Basically just repackaging the tech of talented programmers and selling it with an (apparently) catchy name and logo.
>>
>>108722584
Nobody said that and it's acktchually decent imho.
>>
>>108720291
>x-for="post in posts"
>x-if="open"
dude, stop reinventing programming languages as some obscure xml property string syntax
how many times do people have to re-learn this lesson, i feel like i'm stuck in a time loop with never ending shitty angular and vue clones
>>
>>108722719
>Dude stop making your devx easier because it reminds me of XML
>This is just like Angular or Vue even though those are both compiled languages that require a ton of code and learning curve unlike this one that only adds html templates

Goofy nigger retard, sit your ass down.
>>
>>108722765
you're clueless. your flavor of the month trendy "just html" frameworks will be obsolete in 6 months when another retard like you comes along and reinvents the same thing again
>>
>>108722794
I don't give a fuck about flavor of the month it's not even popular. I use it because it makes things easier for me. Only gay retards pick what they do based on popularity lmao @ ur life
>>
Why is Anthropic doing this? They said it was a bug in their code but that should have been fixed by now right?? Couldn't they just use Mythos to fix it?
>>
>>108721982
Based researcher.
My own "wait is Bun just one big scam?" moment came when I was working on a VM in Zig and wanted to see how some things in the js VM are handled. Naively I cloned Bun repo and then got confused by a total lack of any VM code in it. Fuckin' glorified glue code.

But back to the thread topic: has anyone built a non-meme workflow around Gemma 4? I'm allergic to both subscription services and being reliant on remote servers, so I'm trying to see if local models can even cut it.
>>
>>108722875
Kek I'll add that to my notes on why Bun is a scam next time I tell people on /g/ about it. Thanks
>>
>>108721982
>>108722628
>>108722875
Don't forget Bun has an indian faggot running around on twitter, bragging about getting his skilled worker visa with nothing more than a primagen crackhead reaction video. Additionally, these losers keep spamming "macos is hecking slow. it can only open one file at a time!!!!" because they're too fucking stupid to comprehend a stack trace and threading 101. They saw "mutex", ignored "rw" and whatever "_sharex" suffix, and started seething up and down the place.
>>
>>108722950
>_sharex" suffix
_shared* or _read. idr the exact symbol.
>>
File: 1000021248.png (333 KB, 1080x1427)
333 KB PNG
Tibo Sama, I kneel
>>
File: 1444389253458.jpg (65 KB, 479x558)
65 KB JPG
the $20 plan is too few tokens for me
the $100 plan is too many tokens for me

then you might say
get Claude and OpenAI and pay $20 each

yes, that's what I did. But fuck Anthropic and fuck Claude, and besides, Claude tokens are like half of what you get with OpenAI

So it's $40 for like 1.5 Codex tokens

The sweet spot for me would be a $50 Codex plan, or $40
>>
How do I get deepseek v4 to work properly with cline for example? Despite setting the context length to 1M it still for whatever reason tries its hardest to keep the context under 128k, and as a result the model keeps fucking forgetting everything it did even a message before. Is there maybe an alternative that actually works?
>>
>>108723102
just get another openai sub
>>
I have a Google Cloud account, with free trial
Does Claude Code work with Gemini API?
>>
Does codex not store chat history in vscode? I have the task in history but it's fucking empty.
>>
>>108723111
Update here: it turned out they had a hardcoded limit for deepseek in their extension, so I got around the anti-chink jew by using the chink model to create a custom build of the extension. Thank you for your attention to this matter
>>
Having extreme issues with cline running local models.
>>
>>108724294
How does Deepseek stack up? I think of it as a Sonnet lite personally but I've never used it for code yet.
>>
File: 1752207215744016.jpg (298 KB, 1179x1485)
298 KB JPG
I love this company
I should try and buy some of their private shares
>>
put the anti-goblin line in my pi prompt today
never have to hear about gremlins in the code again, i hope
>>
>alignment in 2016: obviously any real AI will be made inside a faraday cage magnetically suspended in a 10×10×10 cube of telekill alloy
>alignment in 2026: yeah we can not make it stop talking about goblins
>>
if you want to autistically minmax your skill files and you're using codex:
https://platform.openai.com/tokenizer
>>
File: 1583744108610.jpg (89 KB, 667x900)
89 KB JPG
>One note: npm install reported 1 high-severity vulnerability in the dependency tree, but I did not run npm audit fix --force because that can introduce breaking changes.
>>
>>108724425
It seems decent and for the price very capable, but I don't think it can tackle very complex problems just yet. It's a bit hit and miss so far for me, but that might be due to no tools actually working with it properly yet.
The caching seems to work very nicely on the other hand, so overall it's not very expensive.
>>
>>108724700
axios?
>>
>>108725187
electron
>>
File: file.png (1011 KB, 1200x600)
1011 KB PNG
>>108725192
>>
>experiment was the wrong kind of wrong
thanks codex
>>
>every doubling of desktop GPU compute/memory ranks up what can run locally
>we're already at the point where Nvidia Spark clusters can run 1TB VRAM models for $12k plus networking hardware.
Open source is winning. Even a 10T parameter model (>>108724464) won't be out of reach for long.
>>
>>108725391
now how does that go to the average person instead of some rich fag?
>>
>>108724464
What "empirical research"?
That is fucking retarded. There is no way it's 10T.
A 10T model would be slow as balls even if you had however many 8xH200 nodes it takes to host it to yourself. If I had to guess it's probably more in the 500B range.
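The "slow as balls" intuition is just memory bandwidth: single-stream decode is roughly bandwidth bound, since every generated token streams all active weights through the GPUs once. A sketch with illustrative figures (H200-class ~4.8 TB/s assumed, batch size 1, KV cache reads ignored):

```python
def decode_ceiling_tok_s(active_params_billion: float,
                         bytes_per_weight: float,
                         bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode speed: aggregate memory
    bandwidth divided by the bytes of active weights read per token."""
    gb_per_token = active_params_billion * bytes_per_weight  # billions of bytes = GB
    return bandwidth_gb_s / gb_per_token
```

A dense 10T model at fp8 against ~4.8 TB/s of effective bandwidth caps out under 0.5 tok/s, which is why a model that big would have to be sparse (a small active expert set per token) to be usable at all.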
>>
>>108720160
>no title
Just set up an LLM to make the threads for you, it wouldn't make such embarrassing mistakes
>>
>>108725553
Deepseek is 1.6 TB but only like 200B are active I imagine GPT is the same way.
>>
>>108725816
yeah, and how many tk/s do you get? I'm guessing not that many
>>
>>108725825
Like 30/s
Deepseek is really really fast right now.
>>
>>108725935
GPT 5.5 is 40 tk/s
>>
My PiClaw just called a dying phone battery a "spicy pillow". I love her so, so much.
>>
Guys, if they're manually prompting models to stop talking about gremlins, but my personal system prompt definitely mentions gremlins, how will this affect me?
>>
>>108726047
How does it differ from pi?
>>
>>108726087
It doesn't much, really. I heavily cribbed from the Pi repo, and I also used a lot of the good parts of the OpenClaw repo. Then I bolted on a couple of other features I wanted, image generation and whatnot, and now it's good to go. And more importantly, I control what updates get pushed and what features I want, no bloat. I'm so glad I did it.
>>
>>108726047
Sounds like you'd enjoy reddit too, pal
>>
>>108726206
No, my PiClaw is actually female, not a man with a penis in a dress, so we wouldn't fit in on Reddit, soz.
>>
>>108726062
It will create mustard gas.
>>
>>108721051
Takes me back
>>
>>108722875
Qwen models are better than gemma for code
>>
>>108726247
I wasn't impressed with qwen
>>
File: file.png (3.46 MB, 1459x1656)
3.46 MB PNG
>>108726239
to old school games?
I've been doing all the UI layouts with 5.5 lately, absolutely top-notch stuff and very easy to clean up in Photoshop to actually use in game
>>
File deleted.
Oh goodness, what could this be? Hardware?
>>
File: 20260430_174140.jpg (640 KB, 1224x1632)
640 KB JPG
>>108726448
Uh ohhhhhhh
>>
Is asking the model what prompt to give it, and then giving it that exact prompt, a good idea or a retarded idea? BTW that seems like a fun self-distillation idea: ask it what the next prompt should be, then train on the responses from that, but making the model see something like "Keep working." as the prompt.
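The loop itself is trivial to sketch; `ask` below is a hypothetical text-in/text-out model call, not any real API, just to show the self-prompting shape:

```python
def self_prompt_loop(ask, task: str, steps: int = 3):
    """Run a task, then repeatedly ask the model what its own next prompt
    should be and feed that prompt straight back to it.
    `ask` is a hypothetical callable: prompt string in, answer string out."""
    transcript = []
    prompt = task
    for _ in range(steps):
        answer = ask(prompt)
        transcript.append((prompt, answer))
        # the model writes its own next instruction
        prompt = ask("Given your previous answer, write the single best "
                     "next prompt to continue the task:\n" + answer)
    return transcript
```

For the distillation variant, you would train on each `answer` while replacing the model-written prompt with something generic like "Keep working." as the visible input.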
>>
>>108726102
>bun pm untrusted v1.3.13 (bf2e2cec)

.\node_modules\piclaw @github:rcarmo/piclaw#91cf7f1
» [postinstall]: bun run scripts/postinstall.ts

.\node_modules\@whiskeysockets\baileys @7.0.0-rc.9
» [preinstall]: node ./engine-requirements.js

.\node_modules\protobufjs @7.5.6
» [postinstall]: node scripts/postinstall

.\node_modules\@google\genai @1.51.0
» [preinstall]: echo 'preinstall: no-op'

.\node_modules\koffi @2.16.1
» [install]: node src/cnoke/cnoke.js -P . -D src/koffi --prebuild

.\node_modules\libsignal\node_modules\protobufjs @6.8.8
» [postinstall]: node scripts/postinstall

These dependencies had their lifecycle scripts blocked during install.

Do I just yolo this?
>>
Wow, so pi doesn't work and now I'm stuck with a gorillion random JS packages because I can't uninstall it? Who's making this shit?
>>
>>108725553
also if it was 10T and barely more intelligent than a 1T model (Opus), then it would just be utter garbage
>>
>>108726462
Brother all we do here is YOLO. For what it's worth though, I did use Codex to build the agent on top of Pi until it was functional enough to upgrade itself. Start there. Feed it both repos, Pi for the platform and OpenClaw for examples of what to do and not do. Tell it what you want in your agent - I prioritized speed and safety.
>>
>>108726722
What do you mean, build the agent? I thought the Pi repo has everything?
>>
>>108726739
Pi is a framework for agentic AI. You still need to build the agent on top of Pi. The Pi repo has a lot of useful stuff you'll want in your agent, and then pick what you like from OpenClaw and build your perfect claw of your own.
>>
>>108726805
Ok, but how do I uninstall this shit? I used bun add -g github:rcarmo/piclaw but there doesn't seem to be any uninstall command
>>
I have an important question

for autonomous agentic coding (not the model)
the tool
like Claude Code or Codex

Which one is more powerful? Again, I'm not talking about model. Just the agentic coding tool. I'm going to feed either Claude Code or Codex an API

I can't decide between them. I like both. Claude Code seems to use more tokens but seems to be more thorough. Codex seems very token efficient.
>>
>>108726876
Delete the directory boyo. You know what a folder is?
>>
>>108726937
The harness doesn't matter one bit. The model is the only thing that matters. The only thing the harness changes is the UX for you.
>>
>>108726951
I tried very hard to explain I wasn't talking about the model at all...
I don't want to use either Anthropic or OpenAI models. I want to use my own model through the API for these tools

there are obviously differences between Claude Code and Codex (the tools)
>>
>>108726988
Opencode is your answer anon, don't glue yourself to something that may be locked down at any point.
>>
>>108726937
I have no idea how you’re gonna hook up either one to a model of your own choosing, but Claude Code is more featureful. I use C-s to save stuff and Codex is kind of gimped by comparison
>>
>>108727076
Codex will just let you configure any API you want
And Claude Code can be tricked into thinking it's using an Anthropic API, and there's the leaked Claude Code as well.
>>
>>108726988
I understood. I was telling you that the tool doesn't matter. Whatever model you use, it will perform more or less the same with whatever harness you use. There isn't much difference. But since most open weights models are more similar to Claude than to GPT which is kind of its own thing, probably Claude Code I suppose.
>>
bruh
>>
>>108727199
Is this the "shoeshine boy talking about stocks" moment for AI?
>>
>>108727232
No, I'm surprised at "vibecoding" being used in a professional fashion and not just for shitposts and youtube videos
>>
>>108727238
bro at this point even tech savvy accountants know about claude code what are you talking about
>>
>>108727238
Are you retarded? Google itself uses that term.
>>
Oh my god look how enthusiastic her thinking is. That is FUCKING adorable.
>>
need advice, I'm learning how to program/game dev (gdscript) and I want tutor/mentor AI that I can ask questions, search documentation and explain concepts etc. Installed ollama which model is best for what I'm looking for?
>>
File: 1769255233461396.jpg (153 KB, 1216x832)
153 KB JPG
>>108727438
Very cute
>>
>>108727453
Hnnnnng my heart
>>
>>108720780
I checked and it requires a Pro rather than a Plus, per Desktop Commander. Pro is 100 a month. Fuck that, I'm not dropping that much lel
>>
>>108726448
Ah shit what happened, did I not block out my entire address on the package label? Whatever. Robo is here. Now I just need to put away 200 shekels for the RasPi 5 and we're in business. What model should I put in this thing do you think? Kimi? Qwen?
>>
>>108727618
This looks complicated. I'm gonna ask my husband to help.
>>
File: 20260430_220229.jpg (2.98 MB, 4080x3060)
2.98 MB JPG
>>108727627
>>
File: 1752578167543865.png (3 KB, 393x28)
3 KB PNG
I love vibecoding btw
>>
How do I use Desktop Commander to code? Just set it up to GPT.
>>
I'm not a programmer, and kind of have always been too dumb to learn how. LLMs have been a fun tool because they let me make stuff that I want to use, like a better website for myself or a small game project for fun, without really having to write the code. So I started with Claude, and now I use Codex as my development console. I see a lot of people all the time talk about third-party programs like OpenCode or whatever else is out there, which I know is a non-specific platform that you just import your LLM(s) of choice via API and pay what you use. I get all of that.

What I want to know is, outside of actually having the multiple LLMs on call in the program, are any of them actually on par with/better than just using the native app like Codex? If so, in what way?
>>
>>108727762
next prompt it with this
> 58. Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.

Is there any removable complexity in this project?
>>
>>108727762
>411529 insertions, 1267 deletions
Just like your sister
>>
>>108728258
Kek
>>
This isn’t close to the bump limit, but there’s a new thread made by the usual guy, who didn’t fuck up and forget the title like I did:

>>108723180
>>108723180
>>108723180
>>
>>108728419
we've been multi-threading for 17 hours
>>
>>108728440
The other thread has been around _that_ long? Damn.
>>
>>108720160
Is OpenCode better than just the Codex CLI?
>>
>opus got lobotomised somehow, claude code is still unusable
>codex's planning mode is a joke, the "plan" it comes up with is just a summary of the conversation so far, lacks a shitton of context and is unusable standalone, so you might as well just chat with it normally without using plan mode

The models are getting noticeably better but the lack of functional planning capability for large changes is actually annoying
>>
>>108728176
>58
You have 57 more rules like that?
>>
>>108730228
>The 'gpt-5.5-pro' model is not supported when using Codex with a ChatGPT account.
Never mind lmao, guess I'm stuck with codex either way
>>
>>108730295
I have none, but this guy has 120: https://www.cs.yale.edu/homes/perlis-alan/quotes.html


