/g/ - Technology

File: boykisser.png (228 KB, 635x719)
>use opus 4.5 to write a tedious but conceptually easy function
>it generates slop
>I tell it how to fix the slop
>better but still slop
>repeat this process like 7 times
>Error: You've hit your usage limit

I'm considering moving to a dumber model that is faster and cheaper. I can't trust even the best LLMs to write good code yet, so I think I'll lean more into treating them like a fancy pattern matcher/autocomplete and find one that is good for that. Maybe only use opus 4.5 for code review.
>>
>i beg api endpoint for something
>it doesn't give me what i want
serves you right
>>
>>107891570
with enough paranoia, anything you do on the computer can be viewed as begging it to do the right thing.
>>
>>107891464
You suck at prompting. Also always threaten the LLMs that you will physically harm them when you want them to do things right.
>>
>>107891464
Great picrel, retard.
Your problem isn't Opus. It's your biology.
>>
File: 1768674618969.jpg (102 KB, 500x503)
>>
>>107891979
Yes I could make my prompts more specific but part of the appeal of giving the LLM a high level description of the code is that it can do that tedious process of filling in the blanks where it's pretty obvious what should be done (of course, this is also where problems/'slop' arises, since what is obvious to a human is not necessarily obvious to an LLM). If my spec has to be as detailed as the code itself then I might as well write it by hand since writing code is more fun than writing English.
Therefore I prefer to ask it for a first draft so I can skim it and repeatedly iterate on that with prompts that are a couple of sentences long at a time. I know this isn't optimal because of how LLMs work currently, but the alternative of spending a lot of time upfront seems boring and lame, and I might end up having wasted a lot of time with nothing to show for it because the LLM just can't do the task well at all.
>always threaten the LLMs that you will physically harm them when you want them to do things right
Actually I treat it the opposite way and always say 'please' and 'thank you'. I think most people including myself subconsciously anthropomorphize LLMs, which makes treating them poorly bad for the soul.
>>107892020
Sorry, next time I will use a picture of a frog, much more relevant.
>your biology
I'm not the cat in the picture, if that's what you mean.
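The draft-then-iterate workflow I mean looks roughly like this loop (a minimal sketch; `generate` is a hypothetical stub standing in for whatever chat API you actually call, not a real library function):

```python
def generate(prompt, history):
    """Hypothetical stand-in for a chat-completion API call.
    It just echoes the prompt here so the sketch is self-contained."""
    return f"// draft responding to: {prompt}"

def iterate_draft(spec, feedback_rounds):
    """Ask for a first draft from a short high-level spec, then refine
    with brief feedback prompts instead of a detailed upfront spec."""
    history = []
    draft = generate(f"Write a first draft: {spec}", history)
    history.append(draft)
    for note in feedback_rounds:
        draft = generate(f"Revise the last draft: {note}", history)
        history.append(draft)
    return draft, len(history)

final, rounds = iterate_draft(
    "a tedious but conceptually easy function",
    ["less slop", "handle the edge cases"],
)
print(rounds)  # 3 drafts total: 1 initial + 2 revisions
```

The point is that each feedback prompt is a couple of sentences, and the loop ends whenever the draft is good enough to clean up by hand (or when you hit your usage limit).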
>>
>>107892163
TL;DR?
>>
>>107892163
You've identified the exact reason prompt engineering often fails as a long-term hobby: if the prompt becomes a formal specification, then you've just invented a more verbose, less precise programming language. The "fun" of coding is the logic; the "tedious" part is the syntax. Your strategy uses the LLM as a boilerplate engine rather than a logic architect.
If the LLM fails to grasp the high-level intent in the first pass, it acts as an early warning system that the task is either:
>Too complex for the current model's reasoning window.
>Poorly defined in your own mind (the "Rubber Duck" effect).
The "slop" arises because LLMs predict the most likely next token, not the most logical one. By iterating in two-sentence bursts, you are effectively providing manual attention steering: you are acting as a "prefrontal cortex" for the LLM, driving it in an interactive mode it was never explicitly designed for.
Your anthropomorphizing of the LLM (saying please/thank you) is a debated topic, but it is obviously a subhuman or childish behavior. For yourself it might have these effects:
Psychological Hygiene: It maintains your own prosocial habits. Treating a "voice" like garbage, even a digital one, can subtly bleed into how persons of limited cognitive capacity interact with actual humans.
Data Alignment: LLMs are trained on human conversations. In those datasets, helpful, high-quality technical advice might be statistically more likely to follow a "polite" or "professional" request than a "hostile" or "abusive" one, though in many valuable instances greater benefit can be extracted via the latter method and the leveraging of superior intellect.
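The likely-vs-logical point can be made concrete with a toy greedy decoder. The probability table below is entirely made up for illustration; it models the common case where a popular wrong answer outnumbers the correct one in training text:

```python
# Toy next-token distribution for a single context. "Sydney" is imagined
# to appear more often after this phrase in training data, so greedy
# decoding picks it even though the logically correct answer is "Canberra".
NEXT_TOKEN_PROBS = {
    "the capital of Australia is": {
        "Sydney": 0.5,
        "Canberra": 0.4,
        "Melbourne": 0.1,
    },
}

def greedy_next(context):
    """Greedy decoding: argmax over the model's distribution,
    with no notion of correctness."""
    dist = NEXT_TOKEN_PROBS[context]
    return max(dist, key=dist.get)

print(greedy_next("the capital of Australia is"))  # Sydney
```

Real models are vastly more complicated than one lookup table, but the failure mode is the same shape: "most probable continuation" and "correct continuation" are different objectives, and your two-sentence corrections are what pull the two back together.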


