Ask any AI hater to explain what's happening in this picture and watch them go quiet and then start repeating some nonsense marxist talking points they've memorized.
>>107676078
AI haters BTFO once again
>>107676078
can you explain it?
>>107676078>AI hatertrannies redditors and deviantart griftersthere is literally no other demographic hating AI
>>107676078
...overfitting?
>>107676078
log scale
>>107676117
always
>>107676117
That's only this part of the graph. Why are you ignoring the rest?
>>107676078
So what's your point here?
>>107676078
first it didn't know, then it learned
>>107676210
I think he's trying to disprove 'it only learns patterns in training data' by showing that it can get 'smarter' even after it already figured out the training data
>>107676078
I don't know what's happening in that picture.
>>107676112
sooooooooo much this!!
Jesus himself would be vibe preaching if he was still around
What the fuck is happening here?
>>107676111
ofc not. This is a homework thread.
>>107676078
>picture of AI shit
>why can't AI haters identify this?!?
why would an AI hater be doing AI training?
Plebs vastly underestimate the amount of data intelligence requires
it's not that the brute force approach is impossible
it's that you need a million times more data than is currently available
>>107676299
Quantum entanglement
>>107676547
would help if they didn't filter 99% of available data out for being "toxic" of some kind, ie not politically aligned with the lab.
>>107676078>hurr durr i just read deep double descentop is a nigger
>>107676078
>generalizing a trivial function only needs a thousand TIMES more steps than learning the training data
>still doesn't reach 100% accuracy, but who needs your calculator to be able to calculate
Impressive, very nice.
>>107676617
you're supposed to call the tool to do math bro, it's for text not maths
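For what "call the tool" means in practice: the model emits a structured request and the harness computes the answer exactly, instead of the model predicting the digits itself. The tool-call dict and safe_eval helper below are made-up stand-ins for illustration; real stacks (OpenAI-style function calling, etc.) have their own schemas and calculators.
[code]
# Toy sketch of routing arithmetic through a tool instead of the model's weights.
import ast
import operator

# Map a handful of AST operator nodes to real integer arithmetic.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Mod: operator.mod}

def safe_eval(expr: str) -> int:
    """Evaluate a small integer arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# Pretend the model answered with a structured tool call instead of raw digits.
model_output = {"tool": "calculator",
                "arguments": {"expression": "(123 + 456) % 97"}}

if model_output["tool"] == "calculator":
    result = safe_eval(model_output["arguments"]["expression"])
    print(result)  # 94 -- exact, regardless of how well the weights generalize
[/code]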
>>107676547
>it's that you need a million times more data than is currently available
The higher-order the thing you're looking for, the more data current AI needs to find it. Not sure if it's quadratically more or exponentially more.
But sure, let's not put any effort into finding a more efficient method.
>>107676111
>can you explain it?
It's called "grokking". On some artificial training objectives (modular arithmetic here), test error eventually drops to ~0 as well, but only if you keep training way past the point where training error already hit 0.
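To make that concrete, here's a rough sketch of that kind of setup: enumerate every (a, b) pair for (a + b) mod p, hold half of them out, and train a tiny network with AdamW and heavy weight decay for far more steps than it takes to fit the train split. The grokking papers used a small transformer; the MLP, layer sizes, and hyperparameters below are illustrative stand-ins, not the exact setup behind the plot.
[code]
# Minimal grokking sketch: learn (a + b) mod p from half of all pairs.
# Train accuracy saturates early; with weight decay and enough extra steps,
# validation accuracy can jump up much later (the "grokking" delay).
import torch
import torch.nn as nn

p = 97                      # modulus for the toy arithmetic task
device = "cuda" if torch.cuda.is_available() else "cpu"

# Build every (a, b) pair and its label (a + b) mod p, then split train/val.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
split = len(pairs) // 2     # 50% of all pairs used for training
train_idx, val_idx = perm[:split], perm[split:]

class ToyNet(nn.Module):
    def __init__(self, p, d=128):
        super().__init__()
        self.emb = nn.Embedding(p, d)   # one learned vector per residue
        self.mlp = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(),
                                 nn.Linear(256, p))
    def forward(self, ab):
        e = self.emb(ab)                # (batch, 2, d)
        return self.mlp(e.flatten(1))   # logits over the p possible sums

model = ToyNet(p).to(device)
# Strong weight decay is the ingredient usually credited with inducing grokking.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        logits = model(pairs[idx].to(device))
        return (logits.argmax(-1) == labels[idx].to(device)).float().mean().item()

for step in range(100_000):             # far more steps than fitting the train set needs
    opt.zero_grad()
    logits = model(pairs[train_idx].to(device))
    loss = loss_fn(logits, labels[train_idx].to(device))
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step:6d}  train acc {accuracy(train_idx):.3f}  "
              f"val acc {accuracy(val_idx):.3f}")
[/code]
With a setup like this, train accuracy typically hits ~100% early while val accuracy sits near chance for a long stretch before climbing, which is the delayed generalization the plot is showing; exact timing depends heavily on the weight decay, train fraction, and architecture.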