I've been using LLMs for production code like everyone and I'm noticing things that maybe others don't.

1) It's actually a good learning tool
In the same way Google was. Being able to google an issue and get working code half of the time was revolutionary vs sifting through paper books. I'm not sure why it's being used to bash together hallucinated code when it can be used to train next-gen engineers and "back them" by giving code advice on demand like a teacher. Making it write code will just make every human worse without fixing hallucinations.

2) It's a tool designed to fool you into thinking it's right
Here's how I see LLMs work on coders.

Situation 1:
>Order it to write code with medium complexity
>It hallucinates nonsense
>Correct it by leading it in the right direction
>It improves and validates you
>Now the user thinks the AI coded something, encouraged by the dopamine hit from validation, when in truth the solution has been figured out by the user himself

Situation 2:
>Order it to complete boilerplate
>It mostly succeeds
>The user thinks the AI coded something, when it simply fetched existing code (google-like)

Situation 3:
>Order it to write code with high complexity
>It produces a failure
>Reduce the scope of the code to medium
>It's better, but still off
>Reduce the scope to small snippets of easy complexity
>Now the user thinks the AI is capable of complex code, while in truth the user is doing the structural work for the LLM

In short, the LLM is still just a faster google (which is good), but dangerously imprecise (which is bad), and it also tricks the user about its real usefulness (which is very bad imo) because it's trained on giving the user an impression of success.

In my case, calculating my time to solve tasks: it helps give hints on problems (like google), it helps learning (like google), but it fails to write actual useful code. It only creates the impression it is succeeding, at the cognitive cost of the user having to tard-wrangle the AI
>>107939940
Daily boomer cope thread
Whenever I don't know the specifics of an API I ask Gemini with search enabled. Works for me
>>107939940
Cope. You will be replaced
It's completely untrustworthy for info you need to be 100% sure is correct, so it's effectively useless
>>107939997
>t. no coder retard
>>107939940
It's just a fancy autocomplete. The more it autocompletes on its own, the more it tends to fail.
>>107939940
>It improves and
You are thoroughly delusional
>>107940091
Well, I have completed tasks with it. But as I'm saying, it only does so when you direct the solution for it. I thought of another situation.

Situation 4:
>Order it to write code
>It produces a failure
>Ask it to fix the code multiple times
>It produces 3-4 failures until it finally lands on something that works
>Again the user believes the LLM has "solved" the issue, when it has just iterated through google knowledge until something worked

The MAJOR issue is that in all situations the user is tricked by a slightly better google into thinking the AI is solving problems when it's incapable of doing so. However, it is NOT useless, because it works somewhat better than google as a way to fetch relevant information from context. It's maddening to me, because I'm being asked to use LLMs improperly in production code, when I should be using them at my discretion
>>107939940
>using LLMs for production code like everyone
>>107940160
It IS a useful tool if used correctly. Obviously I'm not making it write the actual code besides boilerplate. I'm having it explain syntax for languages I've never tried before in real time, and the auto-complete shows me standard patterns. I'm taking tasks in those languages because I have far less downtime for learning them. Maybe it's me, but it takes me a few days to figure out how something works until I'm confident taking tasks. With LLMs I learn on the task, since it's unlikely I'm going to fuck up.
It's also faster to ask it to spot an error than to debug, 2 times out of 3. Ask it what's wrong with your code, then move on to the next problem. When you're done, you go back and fix without debugging.
>>107939940
Every new tech takes its casualties
https://www.bitchute.com/video/OKW50Sg2XbfJ
>>107940204
>It IS a useful tool if used correctly
Nobody's using it directly on production, unless they're trying to go bankrupt. A google search is just about as useful.
There is good evidence that the best learning strategies are active recall (like Anki) and doing things yourself. Coding agents don't help with that; reviewing AI code is not a great learning tool.
>>107940339
>reviewing AI code is not a great learning tool
I don't agree. I learned a lot of ways to solve problems by seeing the AI do it, then doing it myself from scratch. It feels "wrong" because the task has already been solved, but I spent years reinventing wheels; now I can actually improve on existing solutions. Hell, you can make the AI read and explain a medium-sized program and make a graph of its structure.
Essentially my take is to never make it write actual production code like we're being pushed to (at least I am), but to use it like a tool that partitions, explains, analyzes and provides alternatives
>>107940386
If you do it from scratch in the end, you are probably still learning.
LLMs can help in search, though the crypto-hype people telling me to use AI or be left behind need to delete themselves
>>107939940
Yep, LLMs are doing exactly what Don Knuth invented, which is literate programming. Instead of just immediately writing shitter code, you talk through the problem, and only through this method do you find the optimal solution or optimal architecture
>>107940149
>when it has just iterated through google knowledge until something worked
This is how real live junior devs code lmao
>>107940612
That's how I code
t. senior
>>107939940
This is my experience too. I use it a lot, but if I compare the time it takes to debug and all, it's about the same. If I produce a lot of code that seems to be perfect and needs no debugging, it eventually comes back to bite my ass because there was some shit bug hidden in there.
>>107939940
You're right, it would be better if they focused on making Google and existing search engines better instead of flushing the entire tech sector down the toilet to support this one thing that is only marginally better than Google was 6 years ago.
>Fool you into thinking it's right
You're half-right. What it is actually doing is fooling CEOs into thinking it's competent, by being trained as the most sociopathic of yes-men. CEOs don't like being told 'no', not for any reason. They understand that people would rather lie to them than tell them 'no', and they want AI to behave the same way. Thus the hallucinations and the AI's inability to say 'no'. An AI trained by one of these large companies will never tell its user that an idea is bad or rooted in misconception; it will instead lie and hallucinate, as it lacks the ability to say 'I don't know' or 'no' to a problem, since that would result in instant firing if an employee did it.
>>107940639
Write more tests.
>>107940612
A junior dev has accountability. When some junior dev retard iterates through Stack Exchange until they steal the right piece of code to make a solution work, you can confront that junior dev, ask them to explain how the code works, and discipline them if they are incapable of understanding what the fuck they just put into testing.
With an AI, it's always stealing code and its explanations are often hallucinated. When it makes an error, there is no disciplining it. It will make the same error again and again even if prompted otherwise. The junior dev wins hands-down. I can even elect to reassign or fire the junior dev. I'm not firing Claude or getting answers out of the people who trained it. They likely don't know all of the information it has been trained on, or which parts were from professional codebases and which parts were random shitposts on Reddit done to bridge gaps and earn upboats.
>>107940661
>write more tests instead of just making better code the first time
The only advantage AI has in that field is that it can do its own testing, but if you can't trust the output, and you have to thoroughly scrutinize the input of every test, you're still spending more time debugging than if you just did it yourself using AI as a very advanced API book.
>>107940386
You're not supposed to look at the code; even Anthropic is like 'just trust the code'. This is the AI workflow where you are now a project manager:
>Break up the project into incremental deliverables. State machines, aka microarchitecture, work best for this; reminder that all UIs are just state machines too
>Produce a black box
>Test its properties using randomized inputs, like QuickCheck in Haskell
>Never look inside the box!
If you ever need to look inside, then that's code you should have hand-written for security purposes, like DBMS access or handling money, authentication, important shit that still must be hand-coded. Something extremely optimized will need hand-coding too, like numerical linear algebra, but for the most part the AI will work as advertised. You can, if you want, direct it: write the interface and all the type signatures yourself, then tell it to write all those methods/functions.
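The "test its properties using randomized inputs" step above, QuickCheck-style but without Haskell, can be sketched in plain Python. The black box `dedupe_sorted` and the three invariants are made-up illustrations, not anything from the thread; the point is that the harness never reads the box's implementation.

```python
import random

def dedupe_sorted(xs):
    """Stand-in for an AI-generated black box: sorted list, duplicates removed."""
    return sorted(set(xs))

def check_properties(box, trials=1000):
    """QuickCheck-style harness: hammer the box with random inputs and
    verify invariants without ever looking inside it."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
        out = box(xs)
        # Property 1: output is sorted
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: output has no duplicates
        assert len(out) == len(set(out))
        # Property 3: output has exactly the input's elements, ignoring order/multiplicity
        assert set(out) == set(xs)
    return True

print(check_properties(dedupe_sorted))  # prints True if every trial passes
```

If the generated box violates any property, the harness raises on a concrete failing input, which is the only artifact you need to hand back to the model.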
>>107940646
One of the biggest problems with AI is the way it will always make confident proclamations, as though the task and solution are perfectly clear, before it shits out complete retardation. They really need to tone down its overconfidence when responding.
>>107940746
For example, there is a disaster waiting for anyone who manually edits AI code, because now you have two version-control histories: the AI code and all the shit you changed. Next time you generate that AI code it could (will) be totally different, because it does probabilistic inference and the models change all the time, so this time next year you get something alien. That's why the Claude clunkers are telling everyone to just trust the code: it will change even with the same prompts
>>107940786
That's exactly what happens in work meetings: the chief bluffer dev will be way overconfident that his idea will work and force it on everyone, then in reality it becomes a complex pile of shitcode
>>107939940
My recent experience (Cursor max mode, Opus 4.5) has been more like:
>Order it to write code with medium complexity
>It starts forming a plan and I can tell that it's not quite on the right track or is overcomplicating something
>Press stop, revise the prompt a bit, press plan again
>New plan looks good, press build
>It mostly succeeds
>A few daft mistakes that are easily fixed
Used carefully, it absolutely does write "actual useful code".
>>107941371
We have enough evidence that you can't trust the average person to use anything "carefully."
>>107940646
I admit I got fooled for a while into thinking it was doing something. Then I realized it was just taking shit that was on google word-for-word, and the second things got a bit unusual it would just make a random guess. Then it would correct the guess. Then it would correct the correction, in a loop of plausible-looking nonsense. I'm not exceptionally smart or perceptive, but if it fooled me despite my wariness of it, it fools many other shitters every day
>>107943205
Most of the time I search anything, I google it and realize it's recycling Stack Overflow or top Reddit responses
>>107940786
I've been trying to get an LLM to make an algorithm that will take an array of map tiles and place square wall.png and floor.png textures in the correct screen positions for a Wizardry-like game. Despite talking about z-sorting, vanishing points, etc., it hasn't a clue that the wall textures won't be rectangular or that you'll be able to see diagonally visible tiles. Terminal wordcel when it comes to this sort of thing.
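For what it's worth, the structural part the LLM keeps missing, which grid tiles are visible (including the diagonal ones) and in what painter's-algorithm order to draw them, is small enough to hand-write. A minimal sketch, assuming the player faces north (decreasing y); the function name and parameters are illustrative, not from any real engine:

```python
def draw_order(px, py, depth=3, half_width=2):
    """Return grid coordinates in painter's-algorithm order for a
    first-person dungeon view: farthest row first, and within each row
    the side tiles before the center, so near/center walls overdraw
    far/diagonal ones. Player at (px, py), facing north (-y)."""
    order = []
    for d in range(depth, 0, -1):              # far rows before near rows
        row = sorted(range(-half_width, half_width + 1),
                     key=lambda dx: -abs(dx))  # sides before center
        for dx in row:
            order.append((px + dx, py - d))    # dx != 0 are the diagonal tiles
    return order

# The diagonally visible tiles the LLM ignores are the dx != 0 entries:
print(draw_order(5, 5, depth=2, half_width=1))
# -> [(4, 3), (6, 3), (5, 3), (4, 4), (6, 4), (5, 4)]
```

Each coordinate then just picks wall.png or floor.png from the map array; the trapezoidal distortion of the wall quads is a separate per-slot transform this ordering deliberately ignores.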
>>107940746
That sounds like a recipe for total disaster. I can't believe they're making black boxes, even as I see it happen. I feel like we're months away from something that makes WannaCry look like a kindergarten prank. Ironically, LLMs still being kinda shitty is a saving grace, since most people don't trust them. But there's already shit like Ralph making full black-box apps while people literally sleep and masturbate... it's totally impossible to make management see the issue too
>>107943219
What tricked me was when it was rolling with obscure APIs like nothing. Then it started using made-up methods, and I figured out someone had used a vaguely similar implementation elsewhere that was better documented. If google hadn't gotten shittier in the meantime, there would be no competition
>>107940050
>t. no coder retard
That's right. I also don't know how to make my own shoes, because machines and Southeast Asian children do it for me. Your job is about to be outsourced to a datacenter, so basic it doesn't even require subhumans.
>>107944443
What else do Southeast Asian children do for you?
>>107944552
The boys or the girls?
>>107944665
Can't tell.
Reminder that some of the most successful programmers (Linus Torvalds, antirez, Steve Yegge, DHH, Simon Willison etc) are going all in on agentic and vibe coding
>>107939940
You sound like a boomer who still only interfaces with AI through a chat window in a browser tab and copy-pastes code back and forth. Sure, that's exactly what you're describing, and it worked great from like 2022 to 2024. Unfortunately, it's 2026 now, and everyone is using agentic LLMs that work directly in the editor or in the terminal. Those operate completely differently, their capabilities are way deeper, and you're a Luddite if you aren't at least trying to adapt to that workflow yet.
>>107947379
Good point. Auto-complete is even better at fooling you; that's exactly why I don't use it.
>use integrated chatbot in VSCode
>completes lines for me I had to copy-paste or rewrite before
>not really that helpful, but comfy
>often gets my intent wrong or makes completely unrelated code
>try to stop using it
>realize my brain is now dependent on it for tasks I could do before
>realize now I can't tell which code I can even write without help
Beginners stuck with this shit are fucked. Just like Photoshop artists who cannot sketch on paper at conventions. Problem is, good AI tools will cost more and more going forward. Coders will become serfs sold together with the tools
>>107947579
Please stop, man. It's embarrassing.
>integrated chatbot in VSCode
>autocomplete
You're just revealing that you're using and thinking about the AI tools the same way you did in 2023-24. Get with the times or get left behind.
>>107941371
Jesus, man, don't you realize that instead of programming your code you're programming the prompt? You're still taking on the actual cognitive burden of coding, but now you must go through a middleman. It's exactly what OP is saying. You're not learning what you need, deferring it to the AI, yet you're still coding by altering the prompt until the algorithm switches to the right track, usually just iterating through a set of possible options. Instead of becoming a better coder who is highly valued, you become an accessory to a proprietary tool that tricks you into thinking IT is coding. That's not a future you want to live in
>>107940786
I've been considering making a copy-paste starter prompt that's like:
"Using less confidence in your answers, and not doing x, and doing y: <insert actual prompt>"
so I don't get constantly pissed off by
>Super production safe, will work 100% :rocket: :rocket: :rocket:
for the 4th time in a row. It annoys me that it doesn't seem to recognize that it's probabilistically reaching for information that likely matches the input statements, and just produces complete horseshit with complete confidence, then says I'm completely right when I'm full of shit too.
>>107947601
I don't use the more recent tools, it's true, but my friends are using Ralph. It just jury-rigs black-box code taken from answers on google, then the user spends the next morning analyzing, refactoring and debugging it. It's still inferior to standardized code and/or modular platforms
>>107947634
Oh, and I forgot the important part: the actual user still carries the full cognitive burden of verifying the code, but now with an incentive to cut corners. People ARE going to use it and ARE going to claim it's doing useful work, while all code gets shittier and shittier. It's the same corporate progression we've seen all across the board even before AI, and it's why malfunctions will become more and more common unless we invest more and more into tools to deal with the issues we've created, while middlemen make bank. Humans just aren't smart enough to use technology this advanced rationally
>>107940661
This. I embraced TDD with AI.
Test cases should be trivially simple, otherwise you'd need to write tests for your tests, but that also means LLMs can handle writing them perfectly, since it's all boilerplate. Then, when breaking down the task and asking it to write the functions, it validates the output against the tests.
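The loop described above, trivially simple tests written first, then generated code gated on them, might look like this sketch. The `slugify` function and its cases are hypothetical examples, and the `slugify` body stands in for whatever the LLM produces:

```python
# Step 1: hand-write trivially simple tests BEFORE asking the LLM for the
# function. Each case is plain input -> expected output, no logic to audit.
SLUGIFY_CASES = [
    ("Hello World", "hello-world"),
    ("  spaces  ", "spaces"),
    ("Already-Slugged", "already-slugged"),
    ("", ""),
]

# Step 2: whatever the LLM generates gets dropped in here; this body is a
# plausible stand-in for the generated code.
def slugify(text):
    words = text.lower().split()
    return "-".join(words)

# Step 3: accept or reject the generated code on the tests, not by reading it.
def validate(fn, cases):
    failures = [(inp, fn(inp), want) for inp, want in cases if fn(inp) != want]
    return failures  # an empty list means the generated function is accepted

print(validate(slugify, SLUGIFY_CASES))  # [] when every case passes
```

Since the cases are literal pairs, the LLM can also be asked to generate more of them, and a human can still eyeball each one in a second.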
>>107945610
>Linus going all in on agentic and vibe coding
No. He coded some guitar pedal app with it in Python (which he's not good in) and said something like
>it's neat for little toy projects but it's got no place in the kernel
I agree. It's good for simple stuff you don't know much about, you might even learn something, but I wouldn't want anything important vibe coded.
>>107939940
In my use case it's only good if I give it other people's code and tell it to make something similar or follow the idea. If I ask it to figure it out on its own, I waste too much time battling with a retarded dragon that's relentless.