/sci/ - Science & Math


> be me
> looking into AGI startup
> ask the developer if their AGI is real or just a fancy wrapper for a chat model
> they don't understand
> pull out illustrated diagram explaining what is AGI and what is wrapper
> they laugh and say "it's a real AGI, sir"
> run evals
> it's just a fancy wrapper for a chat model
>>
>>16262037
Real AI is centuries away
>>
Frog thread bumped
>>
I wish people would actually research the base models from before chat-style RLHF tuning; they are so much better. Unfortunately, using them requires talent, so the skill floor is higher.
>>
>>16262037
how about you go back to the 'ddit
>>
Man what a shitty boring stale meme. Did you steal this from twitter?
>>
>>16262146
got any good links or resources? i crave AI research that isn't just "perceptron but large"
>>
>>16262379
no, anon, that's exactly what it still is. RLHF just modifies the training procedure: it's more a customization of the resulting output probability distribution than new data being fed in, but the model is still just regurgitating a probability function, now with "reinforcement learning from human feedback" adjustments baked into it. perceptrons are basically the only architecture we can run or train efficiently, and that's been true for over half a century.
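
to make the "customization of the probability distribution" point concrete: KL-regularized RLHF has a closed-form optimum where the tuned policy is just the base distribution reweighted by exp(reward/beta). minimal numpy sketch; the vocabulary, logits, and reward numbers are all made up for illustration:

import numpy as np

# toy vocabulary and hypothetical base-model logits
vocab = ["yes", "no", "maybe", "idk"]
base_logits = np.array([1.0, 0.5, 0.2, -1.0])

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

base_probs = softmax(base_logits)   # the base "probability function"

# KL-regularized RLHF optimum: pi*(y) proportional to pi_ref(y) * exp(r(y) / beta)
# i.e. the tuned model is the base distribution reweighted by a learned reward.
reward = np.array([0.0, 2.0, -1.0, 0.0])   # hypothetical reward-model scores
beta = 1.0
tuned_probs = base_probs * np.exp(reward / beta)
tuned_probs /= tuned_probs.sum()

print(dict(zip(vocab, base_probs.round(3))))    # distribution before RLHF
print(dict(zip(vocab, tuned_probs.round(3))))   # same function, reweighted

same function family in, same function family out; the reward just shoves probability mass around.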

we actually know biological brains don't work like that, assuming the neuron/synapse architecture is the "source" of intelligence. real brains have feedback paths and cyclical structures that perceptrons lack: between each pair of perceptron layers sits a complete bipartite digraph that only moves in one direction, so there's no way for information to go anywhere but toward the outputs. (and no, backpropagation isn't the same thing - that's a process outside the perceptron that adjusts the weights during training.) we legitimately have no idea how to train artificial neural nets with cyclic subgraphs efficiently the way we can train perceptrons; the closest we get is unrolling the cycle for a fixed number of steps and treating it as a deep feedforward net. we also have no idea how to train even perceptrons in real time. we've been stuck there for half a century.
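here's a minimal numpy sketch of both halves of that - the one-directional dense layers, and what happens once you add a single cyclic edge. sizes and weights are made up; this is an illustration, not anyone's actual training code:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)   # a single input vector

# a dense layer's weight matrix IS the complete bipartite digraph: every
# unit in layer i feeds every unit in layer i+1, and activations only
# ever move toward the output.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))
h = np.tanh(x @ W1)      # layer 1 -> layer 2
y = np.tanh(h @ W2)      # layer 2 -> output; no edge ever points backward

# add one cyclic edge (hidden -> hidden) and you have a recurrent net.
# gradient descent can't follow the cycle directly; the standard trick is
# to unroll it for a fixed number of steps and pretend it's a deep
# feedforward net (backprop through time) - exactly the "we can only
# train DAGs efficiently" limitation.
W_in = rng.normal(size=(4, 8))
W_rec = rng.normal(size=(8, 8))   # the cycle
state = np.zeros(8)
for t in range(5):                # unrolling: cycle -> finite DAG
    state = np.tanh(x @ W_in + state @ W_rec)
print(state.round(3))

the unrolled loop is still a feedforward graph; the cycle itself never gets trained as a cycle.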



