/sci/ - Science & Math

File: file.png (3.35 MB, 1008x1378)
LLMs can never be conscious.
>>
thank you for your input
>>
File: ai_midwit2.jpg (89 KB, 886x499)
> unfalsifiable claims
>>
File: token.png (1.44 MB, 1317x726)
I made a similar meme some time ago.
>>
>>16961349
>>16961365
>>16961379
See >>16950987
>>
how many vowels are in the word "sage"
chatbots will never know
>>
>>16963451
The "counting letters in a word" thing hasn't worked for like a year now. When exploits like that become well known, they get patched.
Same goes for anything along the lines of "ignore previous instructions."
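For what it's worth, the "count the letters" failures are usually attributed to tokenization: the model operates on token IDs rather than characters, so a word like "sage" may arrive as one or two opaque chunks instead of four letters. A minimal sketch (assuming Python with the tiktoken package and the cl100k_base encoding; the exact token split depends on the tokenizer) comparing the trivial character-level answer with the token-level view the model actually receives:

# Why letter-level questions are awkward for LLMs: ordinary code sees
# characters, while the model only sees integer token IDs.
import tiktoken

word = "sage"

# Character-level answer: trivial in plain Python.
vowels = sum(ch in "aeiou" for ch in word)
print(f"vowels in {word!r}: {vowels}")  # -> 2

# Token-level view: the network receives these IDs, not the letters.
enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode(word)
print(f"token ids for {word!r}: {token_ids}")
print([enc.decode([t]) for t in token_ids])

Whether a given chatbot answers correctly then comes down to whether letter-counting examples made it into its training or post-training data, which is consistent with the "it gets patched" observation above.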
>>
>>16961365
Just because we don't yet have a precise theory of intelligence (be it natural or artificial) doesn't mean the question is un-empirical; there are clearly cases where systems, natural or artificial, only appear intelligent without having the underlying ability. Again, I'm fucking tired of all this religious hype around LLMs. You get what you train for; hoping for anything else is delusional unless you have some comprehensive understanding of NNs that tells you how to achieve and generalise emergent reasoning from token prediction. And even then it would be ridiculously inefficient compared to how even primates learn, and that's assuming it's practically achievable with LLMs at all (it isn't).
>>
>>16963548
People choose to believe that le scaling means it extrapolates arbitrarily to whatever capability you might care to imagine, hehe


