>be me
>looking into AGI startup
>ask the developer if their AGI is real or just a fancy wrapper for a chat model
>they don't understand
>pull out illustrated diagram explaining what is AGI and what is wrapper
>they laugh and say "it's a real AGI, sir"
>run evals
>it's just a fancy wrapper for a chat model
>>16262037
Real AI is centuries away
Frog thread bumped
I wish people would actually research pre-RLHF base models, the ones from before chat tuning; they are so much better. Unfortunately using them requires talent, so the skill floor is higher.
>>16262037
how about you go back to the 'ddit
Man what a shitty boring stale meme. Did you steal this from twitter?
>>16262146
got any good links or resources? i crave AI research that isn't just "perceptron but large"
>>16262379
no, anon, that's exactly what that still is. it's just a modification of the training procedure. RLHF is more a customization of the resulting probability distributions in the output than just feeding in more data, but the model is still regurgitating a probability function, now with "reinforcement learning from human feedback" adjustments baked in. perceptrons are basically the only architecture we can run or train efficiently, and have been for over half a century.

we also know biological brains don't work like that, assuming neuron/synapse architecture is the "source" of intelligence. real brains have feedback paths and cyclical structures that perceptrons lack: between each perceptron layer sits a complete bipartite digraph that only moves in one direction, so there's no way for information to go anywhere but toward the outputs. (and no, backpropagation isn't the same thing - that's a process outside the perceptron that adjusts the weights during training.) we legitimately have no idea how to train artificial neural nets with cyclic subgraphs as efficiently as we train perceptrons, and we have no idea how to train even perceptrons in real time. we've been stuck there for half a century.
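to make the one-directional-flow point concrete, here's a minimal numpy sketch (all the names and sizes are mine, just for illustration): the forward pass only ever moves input -> hidden -> output with no backward edges and no state between calls, and backprop is a separate routine outside the forward graph that reads the intermediate activations and nudges the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# a 2-layer perceptron: each layer is a complete bipartite map to the next,
# and information flows strictly input -> hidden -> output.
W1 = rng.normal(size=(2, 4))   # input layer -> hidden layer
W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer

def forward(x):
    h = np.tanh(x @ W1)        # layer 1: one-directional pass
    return h @ W2              # layer 2: output; nothing feeds back

# backpropagation lives OUTSIDE the forward graph: it recomputes the pass,
# derives gradients, and adjusts the weights. it is not part of inference.
def train_step(x, y, lr=0.05):
    global W1, W2
    h = np.tanh(x @ W1)
    y_hat = h @ W2
    err = y_hat - y                    # dLoss/dy_hat for squared error
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1 - h**2)     # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dh
    W2 -= lr * dW2                     # weight updates happen here,
    W1 -= lr * dW1                     # never inside forward()

# XOR toy data, which a single linear layer can't fit
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

loss0 = float(np.mean((forward(X) - Y) ** 2))
for _ in range(2000):
    train_step(X, Y)
loss1 = float(np.mean((forward(X) - Y) ** 2))
print(loss0, "->", loss1)
```

note that `forward` would behave identically if you never trained at all; training only swaps out the weight matrices between calls, which is the sense in which backprop is external to the perceptron.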