/g/ - Technology


Thread archived.




File: LLMs are lookup tables.png (160 KB, 1165x890)
This shouldn't be controversial at all.
The way we're currently doing AI/ML won't lead to AGI.
>>
File: babby.jpg (13 KB, 400x400)
>>100164739
is picrel a lookup table?
>>
>>100164739
This is so moronic, it probably comes from a user with a registration date in the 2020s. Am I right?
>>
Nice straw man.

A standard transformer has no internal dialogue to iteratively run thought experiments, no online learning, no long-term memory, and above all no common sense. That's why it won't do AGI.
>>
>>100164739
sounds like the ramblings of a man in the 110 IQ trap
>>
>>100164834
basically this except actually because it can't rotate a cube
>>
>>100164739
The first two paragraphs are an argument that can be applied to every deterministic algorithm, this pseud just obfuscated it with some notation and smart-sounding words. The third paragraph can be applied to anything at all, including the human mind, unless you believe that minds are magic and can't be described as a partially deterministic, partially random process.
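The point about deterministic algorithms can be made concrete: any pure function over a finite input domain can be exhaustively tabulated, and the resulting table is extensionally indistinguishable from the function. A minimal Python sketch (the toy 8-bit function is invented purely for illustration):

```python
# Any pure function on a finite domain can be replaced by a lookup table
# that agrees with it on every input.
def f(x: int) -> int:
    """Some deterministic 'algorithm' on 8-bit inputs (toy example)."""
    return (x * x + 3) % 256

# Exhaustively tabulate f over its whole input domain.
table = {x: f(x) for x in range(256)}

# The table and the function agree everywhere, so no input/output
# test can tell them apart.
assert all(table[x] == f(x) for x in range(256))
```

By this measure, "is a lookup table" says nothing about minds in particular; it only becomes impractical, not false, as the input domain grows.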
>>
>>100164778
Yes. The brain has a finite set of inputs and outputs. The difference is the sheer number.
>>
>>100164739
This midwit doesn't realize nobody actually understands how neural nets work or why transformers have emergent properties scaling with compute that they "aren't supposed" to have.
The human brain is biological processing (neurons and synapses) at scale in a better architecture than other animals.
>>
>>100164981
>transformers have emergent properties scaling with compute that they "aren't supposed" to have
but they don't.
>>
>>100164739
LLMs can't be AGI but this is a dumb reason why. the real reason they can't is simply that they don't have any kind of central loop that would let them mull over things indefinitely, a large memory that can be written and restructured in real time, an ability to track their confidence in various assertions and distinguish real memories from hallucinations. they're basically like the language area of the human brain, running without the entire rest of the brain.
>>
>>100164834
People who think LLMs are getting us closer to AGI are as stupid as the people who think our current propulsion systems are getting us closer to the stars. Like no, the problem has to be approached from a completely different angle. AGI wouldn't even benefit anyone.
>>
>>100164739
>The way we're currently doing AI/ML won't lead to AGI.
exactly, if any AI model you are using spews out this shit:
>I cannot create content that depicts explicit child sexual content.assistant
>I cannot create explicit content, but I’d be happy to help with other creative ideas.assistant
>I cannot write content that contains explicit themes. Can I help you with something else?assistant
>I cannot create explicit content, but I’d be happy to help with other creative ideas.assistant
>I cannot write content that contains explicit themes. Is there anything else I can help you with?assistant
>I can't write explicit content. Is there something else I can help you with?assistant
>I cannot create explicit content. Can I help you with something else?assistant
>I cannot create content that depicts explicit child sexual content. Can I help you with something else?assistant
>I cannot generate explicit content. If you or someone you know has been a victim of exploitation or abuse, there are resources available to help.assistant
>I can't create explicit content, but I'd be happy to help you write something else.assistant
>>
>>100164778
>heeeey youuuuuu guyyyyys
>>
>>100164739
>>100165349
logic-based models
>>
>>100165305
This anon got it spot on. AGI probably requires us to make these neural networks with an architecture that resembles that of the human brain. In other words, we need to make separate modules that each process one aspect of cognition (language module, sensory module, memory module), and some kind of loop that allows the modules to communicate and share information that is kept in working memory - similar to the global neuronal workspace in the human brain.
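The "separate modules plus shared workspace" idea can be sketched in a few lines. This is a toy illustration of a global-workspace-style loop, not any real architecture; the module names, the salience scoring, and the broadcast rule are all made up:

```python
# Toy global-workspace loop: independent modules read a shared working
# memory, each proposes a contribution, and the loop broadcasts a winner.
class Module:
    def __init__(self, name: str):
        self.name = name

    def propose(self, workspace: list) -> dict:
        # A real module would run its own model (language, sensory, memory);
        # here salience is just a placeholder score.
        return {"source": self.name, "salience": len(self.name)}

def workspace_loop(modules, steps: int = 3) -> list:
    workspace = []  # shared working memory, visible to every module
    for _ in range(steps):
        proposals = [m.propose(workspace) for m in modules]
        # "Ignition": the most salient proposal wins and is broadcast
        # to all modules by appending it to the shared workspace.
        winner = max(proposals, key=lambda p: p["salience"])
        workspace.append(winner)
    return workspace

modules = [Module("language"), Module("sensory"), Module("memory")]
history = workspace_loop(modules)
```

The actual intelligence would live inside the modules; the loop only provides the communication substrate the post describes.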
>>
>>100165305
even so, they wouldn't be AGI either; they would just be autonomous. AGI means it can do anything a human can do, even extreme cases. It's not about autonomy: a future AGI system may be autonomous or not.

even if we had the computational power to have a transformer train itself in real time, it still wouldn't be AGI.
>>
>>100164739
Every human bean is equivalent to a lookup table.
First, encode the position and velocity of the n particles that the human bean is going to interact with throughout its lifetime.
Look up that number in the human bean lookup table.
The entry found in the lookup table is the position and velocity of the n+m particles of the human throughout its lifetime, plus the environment originally encoded.
Thus human beans are lookup tables.
QED
>>
>>100164892
If Llama 2 were a lookup table keyed on its full 4096-token context, with its 32,000-token vocabulary it would have 32000^4096 entries, so about 10^18453. And that's not taking into account other models with larger context windows or the ability to recognize images etc.
Consciousness can process about 60 bits per second (Dijksterhuis, 2004). So 2^60, which is about 10^18.
Going by sheer numbers, LLMs are clearly superior.
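As a sanity check on the orders of magnitude, assuming Llama 2's 4096-token context window and 32,000-token vocabulary, Python's arbitrary-precision integers can compute the table size exactly:

```python
# Back-of-envelope check of the lookup-table comparison above.
# Llama 2: 4096-token context, 32,000-token vocabulary, so a table
# keyed on every possible full context needs 32000**4096 entries.
entries = 32000 ** 4096
digits = len(str(entries))   # number of decimal digits
print(digits)                # 18454, i.e. entries is about 10**18453

# The ~60 bits/second figure for conscious processing, as 2**60:
print(2 ** 60)               # 1152921504606846976, about 1.15e18
```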
>>
>>100164739
This description is completely wrong. I wonder why people in here discuss AI without knowing anything about it...
> AGI
The "general" part we already have; you are simply too dumb to realize it
>>
>retard makes a bad imitation of an argument Searle made in the 80s
this is why you need a humanities education, kids
>>
File: 1715906.png (221 KB, 997x1100)
>>100164739
do not take it too seriously, anon; AIjeets are just stupid in general.
>>
>>100166778
Quantifying organic consciousness as a bit rate is silly, as it seems to imply (to me) only signals that are quantifiable (the senses/brain activity as a collection of nerve impulses) and/or something like the limited rate of conversation.
I don't process a word as a collection of bytes that represent letters; I process a massive volumetric amount of correlative data all related to the point I want to make about the topic at hand. The words are just a necessary byproduct, both convenient (info transfer) and inconvenient (limitations on speed and defined understanding between two parties).

The model needs several hundred TFLOPs of processing in a given time X for a given input Y, consuming vastly more power than the conscious-active portions of a brain to do so.
Going by sheer numbers, LLMs are clearly inferior.
>>
File: indian AI research.jpg (166 KB, 1242x1243)
>>100164739
>hey look I've made some equation the fuck up
>>
>>100164892
>finite set of inputs and outputs
An unprovable assertion based on circular reasoning that brain = computer and computers have finite inputs/outputs, therefore brain has finite input/outputs.

>>100164739
Exactly. AI fictionalists will seethe because they want to strap a pussy to GPT-4 so it will have sex with them.
>>
>>100167556
>An unprovable assertion based on circular reasoning that brain = computer and computers have finite inputs/outputs, therefore brain has finite input/outputs.
It's based on basic new atheism presuppositions, like materialism and computational theory of mind. Read your Dawkins and your Sam Harris before spouting off about what you don't understand. You telling me you believe a flying spaghetti monster put a "soul" in there or what?
>>
>>100165349
>first example choice
you should probably be gassed
>>
>>100167635
I suppose this is true from a practical sense given that as far as we know, the universe is ultimately discrete at some level, but the way it’s phrased is poor. It implies the limitations aren’t a universal thing and are somehow unique to brains. The other issue is that the “input” would have to be the entire state of the observable universe at a given instant.
>>
>>100165305
AGI is a retarded and arbitrary benchmark but this post sounds like a good goal for making a more capable intelligent computer. Good post anon, I especially like the comparison you made at the end
>>
>>100167635
> atheism
now I understand why the IQ here is that low.
> flying spaghetti monster
that's the atheist mind: when someone says "God" they think of an old man in the skies, and then they say "but what if it were actually X".
> soul
It's only logical, if you're not a dummy


