At this point I think the newer models are smart enough that if you ran like 100 thousand instances of them for long enough you could achieve AGI in a year or so. They're not even more intelligent than a junior engineer, but the issue is that the average person can do very little real intellectual work over the course of a lifetime. LLMs are like if you took an immortal halfwit and forced it to take Adderall until it wrote you Shakespeare. I guess it's happening just like AI 2027 lol. [spoiler]Except God will intervene with some bullshit slowdown like a Taiwanese tsunami or a coronal mass ejection or COVID 2 or some shit because we are meant to rot and wither away instead of living in peace and abundance[/spoiler]
>>107885054
C'mon, at least reply to my thread with some antisemitic remark about OpenAI and say that the bubble is gonna burst in 2 more weeks
OpenJews bubble gonna burst. Just two more weeks my fellow ludditeNigs.
>>107885054If you crammed 1,000 80 IQ jeets into a room, you don't suddenly get a 250 IQ SuperJeet. You just recreated an irl Durgasoft Java lecture room instead. Your theory is fatally flawed.
>>107885054
Automated idiocy cannot engender intelligence.
>>107886134>SuperJeetKEK
>>107886134
Yeah but the newer models are more like an autistic savant rather than a simple retard. Completely stupid at some tasks and half decent at others. Claude, given the same amount of time, is better than 99% of all humans at programming and probably better than at least 50% of programmers. A Claude Swarm can eventually bumper-car into something decent.
>>107886206
Computers aren't thinking. A computer doesn't have a will, desires, hopes, dreams; it doesn't love, it doesn't hate - there is no thing-in-itself sitting at the core of an LLM. For that reason alone it will never come up with anything on its own. All it is doing is executing an algorithm designed by humans. Any emergence of novelty is just something akin to metaprogramming done on those programs, which again is only enabled because the process to change its internal state is something designed by a human. LLMs are completely bound by the quality of their training data, which again relies on humans to create that data. An LLM is essentially a better Google search/front end for a database.
Even if an LLM did something novel, it wouldn't even know that it did. It would need a human to recognise that X output is something new and something useful because there is no self governing that LLM. It's funny to see compsci faggots think sentience is reducible to the total sum of integrated information with 0 proof that this is the case (even proponents of that theory like Koch have recently admitted that it is wrong). More data, more GPUs, and a useful LLM that can do more than take input X and apply an Instagram filter to it will come in 2 more weeks, I just need you to print me another 500 billion dollars.
>inb4 muh chess bot, muh protein folding bot
Simply executing an algorithm made by humans. We already know computers are pretty good at executing human-made algorithms