LLMs can never be conscious.
thank you for your input
> unfalsifiable claims
I made a similar meme some time ago.
>>16961349 >>16961365 >>16961379
See >>16950987
how many vowels are in the word "sage"
chatbots will never know
>>16963451
The "counting letters in a word" thing hasn't worked for like a year now. When exploits like that become well known, they get patched. Same goes for anything along the lines of "ignore previous instructions."
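The letter-counting failure comes down to tokenization: the model consumes subword token IDs, not characters. A toy sketch (the vocabulary and IDs here are hypothetical, just for illustration):

```python
# Toy illustration of why letter-counting is hard for LLMs: once text is
# tokenized, the model only sees opaque integer IDs, never the letters.
toy_vocab = {"sage": 17, "chat": 42}  # hypothetical subword vocabulary

def encode(word: str) -> list[int]:
    # Map a word to its token ID(s); the model receives only these.
    return [toy_vocab[word]]

ids = encode("sage")                             # the model's view: [17]
vowel_count = sum(c in "aeiou" for c in "sage")  # trivial with raw characters
print(ids, vowel_count)
```

From the ID `17` alone there is no way to recover how many vowels "sage" contains; the model has to have memorized the spelling (or been fine-tuned on such questions), which is why these exploits get patched rather than solved in general.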
>>16961365
Just because we don't yet have a precise theory of intelligence (natural or artificial) doesn't mean the question is un-empirical: there are clearly systems, natural or artificial, that only appear intelligent without the underlying ability. Again, I'm fucking tired of all this religious hype around LLMs. You get what you train for; hoping for anything else is delusional unless you have some comprehensive understanding of NNs that tells you how to achieve and generalise emergent reasoning from token prediction. And even then it would be ridiculously inefficient compared to how even primates learn, and that's assuming any of it is practically achievable with LLMs at all (it isn't).
>>16963548
People choose to believe that "le scaling" means it extrapolates arbitrarily to whatever you might care to imagine, hehe