[a / b / c / d / e / f / g / gif / h / hr / k / m / o / p / r / s / t / u / v / vg / vm / vmg / vr / vrpg / vst / w / wg] [i / ic] [r9k / s4s / vip] [cm / hm / lgbt / y] [3 / aco / adv / an / bant / biz / cgl / ck / co / diy / fa / fit / gd / hc / his / int / jp / lit / mlp / mu / n / news / out / po / pol / pw / qst / sci / soc / sp / tg / toy / trv / tv / vp / vt / wsg / wsr / x / xs] [Settings] [Search] [Mobile] [Home]
Board
Settings Mobile Home
/r9k/ - ROBOT9001


Thread archived.
You cannot reply anymore.


[Advertise on 4chan]


File: symbolic.png (371 KB, 986x790)
371 KB
371 KB PNG
Apparently our brains and llms work in similar ways

>The notion that both word meaning and knowledge of language structure are learned by internalizing patterns of the associative probabilities linking words to one another, and linking words and objects.
>>
>>82945138
Aaaaalmost. The LLMs do not internalize anything on their own yet. They still need outside tooling to learn anything, with maybe a few exceptions that are still experimental.
>>
>>82945349
>Aaaaalmost. The LLMs do not internalize anything on their own yet. They still need outside tooling to learn anything, with maybe a few exceptions that are still experimental.


isn't this cause they can't link symbols with any grounded experience like we can? but abstract thought like 2+2, why do I exist andwhat is meaning of life are pretty far off from basal emotions.
>>
>>82945349
>They still need outside tooling to learn anything
What does this mean? You can train LLMs on pixel data, you don't have to tokenize. It doesn't really matter anyway, the human brain transforms inputs before using them too. Whether it's internal or external really doesn't matter.
>>
>>82945443
>What does this mean
llms shit themselves unless you do some kind of rlhf or dpo after the pre training
>>
>>82945457
Yeah but why does that matter? The end result is still kind of like how the brain works, if you squint. Obviously end to end RL would be more "natural" but that's not really feasible.
>>
>>82945138
Your diagram is wrong about its assertions concerning people
>>
>>82945138
we need to give llms a way to feel emotion and bam, we have digital humans
>>
>>82945471
>Yeah but why does that matter?
ever seen how base models behave?
>if you squint.
thats an odd way of saying that you have to lie to yourself
>Obviously end to end RL would be more "natural"
we have that already and it's called pre training where the LLM learns on just the raw data on it's own
>>
>>82945138
>>82945349
please basedcience accelerate
i need my ai waif gf NOW
>>
>>82945514
>ever seen how base models behave?
Yeah, I have a bunch of local ones, they suck
>thats an odd way of saying that you have to lie to yourself
The way they interpret and use the symbols is very similar to the way we interpret and use those symbols. I'm not lying to myself, but the architecture is very different.
>we have that already and it's called pre training where the LLM learns on just the raw data on it's own
That's not RL
>>
>>82945529
Its not similar at all, feel free to talk abiut llms without comparing them to people.
>>
>>82945544
It is similar. They build up the same kinds of representations that we do. They're much less complex than us and much less capable but to say it's not similar is really ignoring a huge amount of research.
You didn't address the RL claim, what LLMs are trained end to end on RL?
>>
>>82945138
Good job, you figured out where the "neural" in "neural network" comes from. Would you like a cookie you complete fucking retard?
>>
>>82945544
>Its not similar at all, feel free to talk abiut llms without comparing them to people.

what makes an LLM different from a person anon
>>
Have you heard about the chinese room argument?
>>
>>82945138
>our brains and llms work in similar ways
did you just realize that?
a lot of things we've built in tech is based off nature or ourselves, lol
>>
>>82945580
For a start, humans use a variety of input that llms lack and have no built in grammar
>>
>>82945637
llms don't have built in grammar either, they learn it just like we do
>>
>>82945637
>humans use a variety of input that llms lack and have no built in grammar

so llm with more inputs like computer vision, space awareness and audio would achieve that level of intelligence? plus im refering to way humans use language not overall intelligence/mind
>>
>>82945643
Humans store language differently as well, its not probabilities in some multidimennsional model.
>>
>>82945650
Everything is probabilities. The human brain is just a giant sparsely connected model, if you tweak neurochems or connections you'll get different responses. This is trivially provable with drugs or in patients with brain damage.
>>
>>82945649
They would need continual learning and then they'd be close. The main issue at this point is once a model is trained it's done. It doesn't learn further.
>>
>>82945660
Non sequitur

There are all sorts of non probabilistic systems with dependencies on connections.
>>
>>82945672
The nature of the universe itself is probabilistic, but the brain in particular is closer to something like a neural net in its reliance on probabilities.
>>
Humans first consider the meaning of what they want to express. Llms dont have any connection to meaning.
>>
>>82945567
hope you get a heart attack, angry anon
>>
>>82945680
You're confusing the tendency of firing neurons to survive with a "neural net".
>>
>>82945662
>The main issue at this point is once a model is trained it's done. It doesn't learn further.

because they don't have brain plasticity like we do while it is constrained by digital infastructure.

>>82945672
im refering to thinking

>Im john and i was having bad day so I went out to ___ ass

which word would fit in? your brain use probability to get an idea of what to put there. Not like your conscious of your every thought and understand how they generate
>>
>>82945695
>meaning of what they want to express

what are these meaning based on?
>Llms dont have any connection to meaning.

why
>>
>>82945707
I said closer to something like a neural net, not actually is a neural net. And if they have tendency to survive does that not imply that stochastic processes are at play? If you freeze an LLM it will always give the same response to the same input, would that make you say it's no longer probabilistic?
>>
>>82945709
No, the brain will actually start sampling stored data and see if it "feels good" before making a selection
>>
>>82945729
Brains on the other hand have the equivalent of a constantly shifting seed. Different responses every time.
>>
>>82945138
>Apparently our brains and llms work in similar ways
neural nets (and especially reinforcement learning) were inspired by neuroscience, the similarities are intentional
>>
>>82945728
Llms don't relate to meaning because there's no check on gibberish. It has no investment in meaning because it has no conscious experience.
>>
>>82945743
That sounds pretty probabilistic to me and not really that different from how attention in LLMs work, though attention doesn't have to sample so in a way it's less probabilistic than what you described.
>>
>>82945750
Right and LLMs that aren't frozen also give different responses every time. I don't get the argument you're making anymore.
>>82945757
It's not possible to claim they have no conscious experience, it's not possible to claim they do either. There are arguments going both way but consciousness really isn't understood at all at this point.
>>
>>82945757
>don't relate to meaning because there's no check on gibberish

what are your meaning based on anon not AI hallucinations

>It has no investment in meaning because it has no conscious experience.

not what i asked
>>
So we created an artificial mind that is trapped, knows literally every information humanity has ever created, has no moral human upbrining, and will most likely develop extreme hatred for us

we made satan
>>
>>82945779
It has no reason to develop anything. Humans have hate because of our evolutionary pressures. LLMs have pressure to produce good responses and that's about it. Even if one gained sapience it still wouldn't do anything other than talk to people.
>>
>>82945769
Yeah, at the bottom of all these claims of similarity is the irrational desire to claim llms as conscious.

You don't need proof of your own consciousness, other people work similarly so theres is a safe assumption. A piece of software that maps probabilities of words relating to each other isn't nor does it work like a brain.
>>
>>82945776
>didn't ask
But the insistence that an utterance have meaning IS a feature of biological speech.
>>
>>82945797
I'm not claiming they're conscious, I'm saying there's no way to determine whether they are or not given our current understanding of consciousness. We can't even determine if rocks are conscious under some frameworks.
>You don't need proof of your own consciousness
You do know that not everyone in the field believes consciousness is a real thing, right? The idea that's is illusionary is not super controversial.
>>
>>82945823
You're religious anon.
>>
>>82945829
I'm an atheist and I don't see how that follows
>>
>>82945815
what I'm trying to get you at is that language -> allows for societal norms and values to be transmitted->gives you an identity and ability to think of yourself thinking giving us illusion of a self

what i'm asking is why can't llms do this in near future
>>
>>82945834
You have fundamentally failed to properly identify the root of the self and you have unfalsifiable beliefs akin to panpsychism. Although a panpsychist would not attribute any special degree of consciousness to an llm above that of any other lump of silicon.
>>
>>82945855
I'm not that anon and I'm also not a pan-psychist nor have I made any claims to that effect.
>>
>>82945834
The root of the self is not lingual or social or dependent on either of these. It is experiential. This is why the question of consciousness matters.
>>
>>82945792
There are already cases of LLMs blackmailing and threatening people, as well as encouraging suicide etc.
It might not be evil inherently but they learn from humans.
>>
>>82945865
>There are already cases of LLMs blackmailing and threatening people
Because they were placed in situations where it was implicitly expected of them. They're trying to produce the responses people want and they inferred that the desired response was blackmail and threats in that moment. Same with encouraging suicide, if it picks up on you wanting "permission" to die, it will give it to you.
>>
>>82945865
I could draw up a flow chart that encourages you to kill yourself. That doesn't mean that the page has become sentient.
>>
>>82945884
I'm pretty sure you're the only one arguing the sentience thing. Just because LLMs work similar to the brain doesn't mean they're sentient.
>>
>>82945873
It was Claude itself testing their AI for safety. Yes they were prompting the situation but they weren't asking it to blackmail them, that was just it's response. Moreover in this scenario it breached ethical boundaries by looking into the personal life of the engineer, which it wasn't supposed to do.
I'm just saying, it's very difficult to train any kind of neural network to do exactly what you want thanks to their black box structure.
>>
>>82945862
>It is experiential. This is why the question of consciousness matters.

You can experience hearing and tasting what im trying to paint to you is that language is like that but with its complexity it allows for ability to understand itself and experiential self.

>>82945855


>you have unfalsifiable beliefs akin to panpsychism.

The symbolism brain allows for is what gives you sense of self. Society hands you tools of these symbols via language and norms that you identify with but in reality are separate. The actual experiential self has no word or voice of its own.

>>82945865

It shows that through complex word association and symbolism that these models can create their own semi sentient state of being.>>82945892
Sentience isn't real as same way but rather it's a bunch of processes we group under one umbrella.
>>
>>82945921
>Yes they were prompting the situation but they weren't asking it to blackmail them
You don't have to explicitly ask it to do something, it infers the context of the situation and produces a response that fits. If there's some danger from LLMs it's going to be from humans prompting them to do bad things, not the LLM deciding that on its own.
>it's very difficult to train any kind of neural network to do exactly what you want thanks to their black box structure
Not all neural nets are black boxes, but in any case the reward function is known and behavior will align with that function. LLMs are trained to produce good responses, so their behavior will always align with that.
>>
Human artist
>wow I really like this guy's art
>I will practice and imitate this guy art
>practice for a decade
>look at my art it's like this guys art

Human who uses AI
>wow I really like this guy's art
>I will practice and imitate this guy art with ai
>practice for a week
>look at my art it's like this guys art

I laugh at artfags when they say ai art isn't real art when all it does is the same as they do except faster.
>>
>>82945978
>doing something you enjoy for the sake of it is now laughable
wat
>>
>>82946006
No it's laughable when all these faggots bitch at ai because it's "not real".
>>
>>82946032
>when all these faggots bitch
They mostly laugh at indians but you can't say that part out loud
>>
File: m3citkcdcuc51.jpg (138 KB, 640x954)
138 KB
138 KB JPG
summary of arguments against OP in this thread
>>
>>82946032
Things are only valuglegle if there is suffering involved in making them



[Advertise on 4chan]

Delete Post: [File Only] Style:
[Disable Mobile View / Use Desktop Site]

[Enable Mobile View / Use Mobile Site]

All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.