/g/ - Technology


I tried to take down ChatGPT by frying its circuits. Why didn't it work? Shouldn't it have been stuck in an infinite loop? I've seen thousands of AI movies and it always worked.
>>
The adversarial part is where the catch occurs
>>
Ask it to show you the Unicode character for a seahorse
>>
>>107151786
why do thirdies use chatgpt for shit like this
>>
>>107151786
>I've seen thousands of AI movies
Name 5
>>
>>107151786
it might work on an AI but not on a parrot.
>>
>>107151786
>Shouldn't it have been stuck in an infinite loop?
Why would it be stuck in an infinite loop? Can you explain what mechanistic aspect of a decoder transformer model would force it to get stuck?
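For reference, a minimal sketch of the outer sampling loop around a decoder transformer (`model` and `tokenizer` are HuggingFace-style placeholders, not anything specific to ChatGPT). Each forward pass does a fixed amount of work, and the only loop is this driver, which is hard-capped by `max_new_tokens`, so no prompt can trap the model in a movie-style infinite loop.

```python
import torch

@torch.no_grad()
def generate(model, tokenizer, prompt: str, max_new_tokens: int = 256) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):        # bounded by construction
        logits = model(ids).logits         # one fixed-size forward pass
        next_id = logits[0, -1].argmax()   # greedy next-token choice
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
        if next_id.item() == tokenizer.eos_token_id:
            break                          # the model chose to stop early
    return tokenizer.decode(ids[0])
```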
>>
ban boomers who think LLMs are anything more than glorified calculators
>>
>>107151786
You're talking to a token predictor.
>>
>>107152508
LLMs are not calculators, even if they can do a bit of arithmetic; they just do it at millions or billions of times the operation count a normal calculator requires
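To put rough numbers on that claim: a dense decoder transformer is commonly estimated at about 2 FLOPs per parameter per generated token. The 70B model size below is an illustrative assumption, not a known ChatGPT figure.

```python
# Back-of-envelope check of the "millions or billions of times" claim.
# Assumption: ~2 FLOPs per parameter per generated token (the standard
# dense-transformer estimate); the 70B size is a hypothetical example.
params = 70e9                    # hypothetical 70B-parameter model
flops_per_token = 2 * params     # ~1.4e11 floating-point ops per token
calculator_ops = 1               # a pocket calculator adds in ~1 ALU op

print(f"{flops_per_token / calculator_ops:.1e} model ops vs 1 calculator op")
# -> 1.4e+11: over a hundred billion operations just to emit one token
```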
>>
>>107152720
you are a token predictor. why do the reductionist retards never take the final step in their deduction and notice that the human brain is just another physical computation device that functions in a similar way?
>>
>>107152731
>why
Because while they're indeed retarded, at least they're still smarter than you.
>>
>>107151826
ask it in German and the instance will crash
it's a current example of a training-data poisoning attack
>>
>>107152731
It's not a reductionist argument. You're literally talking to a token predictor - a language model. LLMs can be components of larger architectures, or used as control centers for agentic architectures, or constantly fed back their own output before feeding you a response to simulate reasoning, but they're still token predictors. It's not going to choke on your logic problem because it's not actually running computations within the model itself to determine a solution. It's predicting response text based on its training and hyperparameters. Best case scenario would be for it to actually generate and execute a program based on the OP prompt, but there are likely compute guardrails in place to stop the average joe from racking up a billion-dollar cloud bill + OP didn't actually ask for validation through tool execution in their prompt.
This stuff isn't magic. It's layers of abstraction presenting as emergent phenomena which can still get nut checked by anyone who knows the architecture + paid a bit of attention during their theory courses.
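As a sketch of the "generate and execute a program" path described above: `ask_llm` is a hypothetical stand-in for any chat-completion API, and the subprocess timeout plays the role of the compute guardrail the post mentions; nothing here is a real provider interface.

```python
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call.
    raise NotImplementedError("wire up your provider of choice here")

def solve_with_tool(problem: str, timeout_s: float = 5.0) -> str:
    # Have the model emit a program instead of predicting the answer text.
    code = ask_llm(f"Write a Python program that solves:\n{problem}")
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # The timeout is the "compute guardrail": a generated program that
        # really does loop forever gets killed instead of running up a bill.
        out = subprocess.run([sys.executable, path],
                             capture_output=True, text=True, timeout=timeout_s)
        return out.stdout
    except subprocess.TimeoutExpired:
        return "killed: generated program hit the compute guardrail"
```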


