>>108282464
>raw intelligence
there is no such thing. it's just a next-token predictor. it appears intelligent in some situations because it has seen a lot of instruct and benchmaxxed synthetic reasoning data that shows a simulation of a reasoning process across a variety of topics. It's still predicting the thing it saw in that data.
why do you think even the SOTA online API models will still behave like pic related when they see any sentence related to their benchmax overfit? They don't have "intelligence". The entire purpose of an LLM is to take a document in the form of
<|some_magic_tag|>THE_LUSER
HERE'S A LOT OF RETARDED SHIT
<|some_magic_tag|>THE_ASSISTANT
HERE'S HOW I FIX YOUR RETARDED SHIT
STEP 1: KYS
STEP 2: INVENT A TIME MACHINE AND MAKE SURE YOUR MOTHER NEVER MEETS YOUR FATHER
and make the document bigger. Token by token, until a stop token is predicted. Or it gets into an infinite loop and never stops until the backend either times out or runs out of context, like every GLM loves to do.
MAKE. DOCUMENT. BIGGER.
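that's the whole loop, by the way. here's a toy sketch of it — `predict_next` is an invented stand-in for a real model's forward pass plus sampling, and the tag names are made up, but the shape of the loop is exactly this:

```python
# Toy sketch of the autoregressive "make document bigger" loop.
# predict_next() is a fake stand-in for a real model; tag names are invented.

STOP_TOKEN = "<|eot|>"
MAX_CONTEXT = 16  # toy context limit; real models allow thousands of tokens

def predict_next(tokens):
    # Fake predictor: regurgitates a canned reply, then emits the stop token.
    canned = ["STEP", "1:", "...", STOP_TOKEN]
    reply_len = len(tokens) - tokens.index("<|assistant|>") - 1
    return canned[reply_len] if reply_len < len(canned) else STOP_TOKEN

def generate(prompt_tokens):
    tokens = list(prompt_tokens)
    # Keep appending (MAKE. DOCUMENT. BIGGER.) until a stop token is
    # predicted or the context window runs out.
    while len(tokens) < MAX_CONTEXT:
        nxt = predict_next(tokens)
        tokens.append(nxt)
        if nxt == STOP_TOKEN:
            break
    return tokens

doc = ["<|user|>", "fix", "my", "code", "<|assistant|>"]
print(generate(doc))
```

no "thinking" anywhere in there — just append, check for stop, repeat. drop the stop-token check (or have the model never predict it) and you get the infinite loop that eats the whole context.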