>use opus 4.5 to write a tedious but conceptually easy function
>it generates slop
>I tell it how to fix the slop
>better but still slop
>repeat this process like 7 times
>Error: You've hit your usage limit
I'm considering moving to a dumber model that is faster and cheaper. I can't trust even the best LLMs to write good code yet, so I think I will lean more into treating them like a fancy pattern matcher/autocomplete and find one that is good for that. Maybe only use opus 4.5 for code review.
>i beg api endpoint for something
>it doesn't give me what i want
serves you right
>>107891570
with enough paranoia, anything you do on the computer can be viewed as begging it to do the right thing.
>>107891464
You suck at prompting. Also, always threaten the LLMs that you will physically harm them when you want them to do things right.
>>107891464
Great picrel, retard. Your problem isn't Opus. It's your biology.
>>107891979
Yes, I could make my prompts more specific, but part of the appeal of giving the LLM a high-level description of the code is that it can do the tedious work of filling in the blanks where it's pretty obvious what should be done (of course, this is also where problems/'slop' arise, since what is obvious to a human is not necessarily obvious to an LLM). If my spec has to be as detailed as the code itself then I might as well write it by hand, since writing code is more fun than writing English.
Therefore I prefer to ask it for a first draft, skim it, and repeatedly iterate on that with prompts that are a couple sentences long at a time (rough sketch of the loop at the bottom of this post). I know this isn't optimal because of how LLMs work currently, but the alternative of spending a lot of time upfront seems boring and lame, and I might end up wasting a lot of time with nothing to show for it because the LLM just can't do the task well at all.
>always threaten the LLMs that you will physically harm them when you want them to do things right
Actually I treat it the opposite way and always say 'please' and 'thank you'. I think most people, myself included, subconsciously anthropomorphize LLMs, which makes treating them poorly bad for the soul.
>>107892020
Sorry, next time I will use a picture of a frog, much more relevant.
>your biology
im not the cat in the picture if that's what you mean
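The loop, concretely: a minimal sketch using the Anthropic Python SDK's messages.create; the model id string and the interactive "empty line to accept" exit condition are my assumptions, not anything official.
[code]
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
messages = [{"role": "user", "content": "high-level spec for the tedious function"}]

while True:
    reply = client.messages.create(
        model="claude-opus-4-5",  # assumption: substitute whatever model id you actually use
        max_tokens=4096,
        messages=messages,
    )
    draft = reply.content[0].text
    print(draft)
    feedback = input("fix-the-slop prompt (empty line to accept): ")
    if not feedback:
        break
    # every round re-sends the growing context, which is why the usage limit hits after ~7 iterations
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": feedback},
    ]
[/code]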
>>107892163
TL;DR?
>>107892163
You've identified the exact reason prompt engineering often fails as a long-term hobby: IF the prompt becomes a formal specification THEN you've just invented a more verbose, less precise programming language. The "fun" of coding is the logic; the "tedious" part is the syntax. Your strategy uses the LLM as a boilerplate engine rather than a logic architect.
If the LLM fails to grasp the high-level intent on the first pass, that acts as an early warning that the task is either:
>too complex for the current model's reasoning window
>poorly defined in your own mind (the "rubber duck" effect)
The "slop" arises because LLMs predict the most likely next token, not the most logical one (toy example at the bottom of this post). By iterating in two-sentence bursts, you are effectively providing manual attention steering: acting as a "prefrontal cortex" for the LLM, in an interactive mode it was never really designed for.
Your anthropomorphizing of the LLM (saying please/thank you) is a debated topic, but it is obviously subhuman or childish behavior. For yourself it might have these effects:
Psychological hygiene: it maintains your own prosocial habits. Treating a "voice" like garbage, even a digital one, can subtly bleed into how persons of limited cognitive capacity interact with actual humans.
Data alignment: LLMs are trained on human conversations. In those datasets, helpful, high-quality technical advice may be statistically more likely to follow a "polite" or "professional" request than a "hostile" or "abusive" one, though in many cases greater benefit can be extracted via the latter method and the leveraging of superior intellect.
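What "most likely, not most logical" means mechanically, as a toy sketch (made-up logits, no real model involved):
[code]
import math

# toy next-token logits after a prefix like "the function returns"
logits = {"result": 2.1, "None": 1.7, "self": 0.9, "42": 0.2}

# softmax turns the logits into a probability distribution
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# greedy decoding picks the highest-probability token; nothing here checks
# whether the token is *correct*, only that it is statistically common -- slop
next_tok = max(probs, key=probs.get)
print(next_tok, round(probs[next_tok], 3))
[/code]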