Can you make a truly agentic AI that doesn't need babysitting to prevent it from nuking your entire project for no reason?
>>107470068
Not before AGI. Otherwise there's always some hallucination risk, however minimal.
>>107470068
Can you have an intern that doesn't need babysitting to prevent it from nuking your entire project for no reason?
>>107470068
Not if it's based on current LLMs. I'm not saying another approach wouldn't be better or that LLMs can't be improved in terms of "wrong" output, but to keep it short: no, not currently. Ask again in a few months or years.
>>107470068
>hand-drawn art about AI
interesting
>>107470068
Breedable designs
>>107470916
>human le also bad talking point variation
Obvious AI spam.
>>107470068
Wtf is this retarded Korean attempt at anime?
>>107471219
>retarded
Koreans mogged the nips even back during their lowest years:
https://youtu.be/YAyu9_NInDA?si=XpugTBkIiNdqsg_6
https://youtu.be/7ANPox8LzLU?si=2r4mEkEme1Ugv1Qf
>>107470068
Use case for worrying about agentic AI nuking your entire project? Just have a backup/restore cron job or something.
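Something like this rough sketch, pointed at by a cron entry every few minutes. The paths and archive naming are made up, adjust to taste; the point is that a misbehaving agent can only cost you one snapshot interval of work.
```python
# Minimal periodic snapshot sketch (hypothetical paths: ~/myproject -> ~/backups).
import tarfile
import time
from pathlib import Path

PROJECT_DIR = Path.home() / "myproject"   # assumed project location
BACKUP_DIR = Path.home() / "backups"      # assumed backup location


def snapshot() -> Path:
    """Write a timestamped tar.gz of the project and return its path."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"myproject-{stamp}.tar.gz"
    with tarfile.open(str(archive), "w:gz") as tar:
        tar.add(str(PROJECT_DIR), arcname=PROJECT_DIR.name)
    return archive


if __name__ == "__main__":
    print(f"wrote {snapshot()}")
```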
>>107470068
Agentic IDEs make local diffs that you have to manually accept. Just review them before accepting the diffs.
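Same review-before-accept idea works outside an IDE too: let the agent edit a working copy, look at the diff, and only commit if you actually like it. Rough sketch, assumes the project is a git repo:
```python
# Show the pending diff and commit only after explicit human confirmation.
import subprocess


def review_and_commit(message: str) -> bool:
    diff = subprocess.run(
        ["git", "diff"], capture_output=True, text=True, check=True
    ).stdout
    if not diff:
        print("no pending changes")
        return False
    print(diff)
    if input("accept these changes? [y/N] ").strip().lower() != "y":
        # discard unstaged edits to tracked files
        subprocess.run(["git", "checkout", "--", "."], check=True)
        return False
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
    return True


if __name__ == "__main__":
    review_and_commit("apply agent-proposed changes")
```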
>>107470068
It's not really an issue if you're running it sandboxed, not in production, and doing frequent backups. You can add pydantic schemas and unit tests on top of that.
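For the pydantic part, a minimal sketch assuming the agent hands back its proposed edits as JSON. The EditProposal schema and the src/-only restriction are illustrative assumptions (pydantic v2), not any particular framework's API:
```python
from pathlib import Path

from pydantic import BaseModel, field_validator

ALLOWED_ROOT = Path("src")  # assumed: the agent may only touch files under src/


class EditProposal(BaseModel):
    path: str
    new_content: str

    @field_validator("path")
    @classmethod
    def path_must_stay_in_allowed_root(cls, v: str) -> str:
        p = Path(v)
        if p.is_absolute() or ".." in p.parts:
            raise ValueError(f"refusing suspicious path: {v}")
        if p.parts[:1] != (ALLOWED_ROOT.name,):
            raise ValueError(f"refusing path outside {ALLOWED_ROOT}/: {v}")
        return v


def apply_edit(raw: dict) -> None:
    """Validate the agent's JSON output, then write the file."""
    edit = EditProposal.model_validate(raw)  # raises on schema violations
    target = Path(edit.path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(edit.new_content)


# usage: apply_edit({"path": "src/app.py", "new_content": "print('hi')"})
```
Anything the model emits that doesn't fit the schema or points outside the sandboxed tree just gets rejected instead of written to disk, and your unit tests catch whatever slips past that.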
>>107470616
Not even then. A pattern-matching black box trained on pozzed normie data has no thought process or even a minimal understanding of what it's saying or doing. You can't trust it for shit.
>>107470616
>muh AGI
Two more weeks, right?