Is AGI conceptually possible? It feels like the entire economy is currently betting on AGI becoming achievable, and every one of the big players is trying to get to it first. But that seems like betting on a company to achieve immortality, or a tech company to achieve time travel.
>the entire economy is reliant on a myth
>the myth is an artificial secondary being that's smarter than the one that created it
>that myth isn't conceptually possible
>mfw the economy crashes and the close-to-AGI chatbot is trying to explain to me, using my own white papers, how a nescient being cannot create an omniscient one, and how basing an entire global economy on it is as retarded as lending your parents money expecting to get it back
>>106767252
AGI is conceptually possible. We won't achieve it with LLMs and GPUs, but there's nothing theoretically impossible about it. Creating a being smarter than its creator is perfectly plausible. I bet you're smarter than your mother.
>>106767332
came here to type this, kinda. you're describing ASI, but your point is valid.
llms are statistical models. it's only natural they'll suck ass at deterministic problems.
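the statistical-vs-deterministic point can be shown with a toy sketch. everything below is made up for illustration (the answer distribution is invented, not from any real model): sampling from a probability distribution over answer tokens will occasionally disagree with the one correct answer, no matter how you weight the mass.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Toy "statistical model": picks an answer token by weighted chance,
# roughly the way a sampled LLM does. Probabilities are invented.
def sample_answer(dist):
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Hypothetical distribution a model might assign to "17 * 23 = ?"
dist = {"391": 0.85, "381": 0.10, "401": 0.05}

# The deterministic computation has exactly one answer...
assert 17 * 23 == 391

# ...while sampling returns wrong ones a nonzero fraction of the time.
answers = {sample_answer(dist) for _ in range(1000)}
```

run it enough times and the set of answers contains more than just the correct one, which a calculator will never do.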
>>106767252
>Is AGI conceptually possible?
Yes
>AGI basically becoming achievable and one of the big players are trying to get to it first
They won't. AI breakthroughs come from smart people coming up with new ways of doing things, not from funneling billions into scaling existing models. There have been no meaningful AI improvements since GPT-2, only scaling up of existing models.
>>106767332
>I bet you're smarter than your mother
Well, my mother wasn't my teacher, and my mother didn't design my brain down to the blueprint level. She basically got the starter kit when she got fucked, like everyone else.
If we were to create an entirely new being at the cellular level, we'd be limited by our biological designs, because that's literally all we know and how we perceive and understand everything. Neural networks are literally designed to mimic human forms of interconnected thought to make new ones. So how can we make it smarter when realistically it'll be just as limited as us? All we can perceive is rehashing existing information to make new stuff. I would assume AGI needs some kind of external input beyond what we know and what we can make. Religious freaks use spirituality, so god has "divine insight". In science fiction, super-advanced robots have ridiculous perception from equally mythical sensors and tech. Even if the latter is conceptually possible, none of the tech it requires exists yet. So AGI is currently still a wet dream of engineers.
>>106767252
>Is AGI conceptually possible?
Yes, given infinite space (performance + storage). Even ASI is possible.
>>106767332
>I bet you're smarter than your mother
They can be experts in different areas, anon.
>>106767362
>Yes, given infinite space (performance + storage).
So does that mean all the tech companies currently seeking investment to achieve it are peddling it like snake oil?
>>106767361
The researcher isn't the AI's teacher. Even our current LLMs learn from huge piles of text. And even if human intellect were the upper limit for human-made AIs, the maximum of human intellect is far above the level of intelligence most people possess.
>>106767375
>So does that mean all the tech companies that want investment to achieve it are peddling it like snake oil?
Yesn't. It's the typical Silicon Valley technique: promise high, ship a minimum viable product. It wouldn't surprise me if they just end up with "AGI" by having multiple AIs learn a lot of separate topics and then dynamically switching between them. It's not real AGI, but they will still try to sell it as such.
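the "separate topics + dynamic switching" setup can be sketched in a few lines. to be clear, everything here is hypothetical: the specialists are stub functions and the router is a crude keyword match standing in for a real classifier.

```python
# Hypothetical sketch of "several specialist models + a router" sold as AGI.
# The specialists are stubs; a real system would call separate narrow models.
SPECIALISTS = {
    "code": lambda q: "[code model] " + q,
    "math": lambda q: "[math model] " + q,
    "chat": lambda q: "[general model] " + q,
}

def route(query):
    # Crude keyword dispatch. Real routers use a learned classifier,
    # but the principle is identical: switch between narrow models,
    # don't actually generalize.
    q = query.lower()
    if any(w in q for w in ("bug", "compile", "function")):
        return SPECIALISTS["code"](query)
    if any(w in q for w in ("integral", "prove", "equation")):
        return SPECIALISTS["math"](query)
    return SPECIALISTS["chat"](query)
```

no single component in there is general; the "generality" is just the dispatch table.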
>>106767469
That makes sense at least. The biggest red flag to me is what even constitutes AGI. It's not like we have a test or identifier for it. If we could conceptualise a test, that would render the AGI useless; and if we have the AGI generate the test, how can we verify it? AGI starts to run into cyclical contradictions that make it impossible to use in an economic model. The investors don't know how to identify it, the engineers don't know how to identify it, and if the AGI somehow does exist, how would it even verify itself?
Eric Schmidt literally said that recursively self-improving LLMs are "very close" to being a reality. He's the former CEO of Google, so I'm inclined to take him at his word.
>>106767492
>self-improving LLMs
Are "self-improving LLMs" AGI? And if so, how would you test it?
>>106767501
I don't know how useful AGI even is as a concept. Humans aren't perfectly general; we can't mentally rotate high-dimensional structures, for instance. Think more in terms of net capability: RSI AIs will be enormously, mindbogglingly capable relative to us.
>>106767492
>someone selling a product is telling me that his product is very close to being an even better product
>I better believe him, he made that product so he must know a lot about it
>>106767501
>Are "self-improving LLMs" AGI? and if so how would you test it?
No, they aren't. This is once again the Silicon Valley thing: good LLMs will be used to write code for creating other LLMs. That won't make them AGI. AGI stands for Artificial GENERAL Intelligence; they will just water down the meaning so every good LLM counts as AGI. Did we forget OpenAI claimed GPT-4 has a bit of AGI? That's what I mean.
>>106767492
>recursively self-improving LLMs
We don't even have those.
>>106767515
>I don't know how useful AGI even is as a concept.
It's useful because it would know about everything and could 'think' of things we haven't thought of but that are possible to make. That's the value.
>>106767492
>Eric Schmidt
>llms
it's not his domain. he knows shit about them.
the closest thing he has to do with llms is a lexical analyzer. that's a fancy name for a code generator. i wrote one too; i generate opencl code with it. but i also deal with stats and he doesn't. that's what i use said opencl for.
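for what it's worth, the kind of template-driven generator i mean can be sketched like this. the template and names below are made up for illustration; nothing here touches an OpenCL runtime, it just emits kernel source as a string.

```python
# Hypothetical template-based generator that emits OpenCL kernel source.
# No OpenCL driver is needed; the output is just a source string.
KERNEL_TEMPLATE = """__kernel void {name}(__global const float* a,
                     __global const float* b,
                     __global float* out) {{
    int i = get_global_id(0);
    out[i] = a[i] {op} b[i];
}}"""

def make_kernel(name, op):
    """Fill the template with a kernel name and an elementwise operator."""
    return KERNEL_TEMPLATE.format(name=name, op=op)

src = make_kernel("vec_add", "+")
```

point being: this is deterministic string assembly, about as far from "llm expertise" as it gets.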
>>106767576
>they can 'think' of things we haven't thought of but is possible to make.
Well, that's my issue. How can we create something that will be able to think of things outside our realm of understanding, when our entire mathematical models are built on our perception of them? The "AGI" we make is going to be limited by its entire LLM being based on us. So AGI ISN'T possible, because we don't know how to make it or what it is.
>tech companies have repeatedly been selling us models that tell us the answer is 42
>and every investor is a retard who didn't understand why the machine said 42 in hitchhiker's guide
It's a stupid answer to a stupid question.
>>106767751
I only think this can apply to things we know we don't know. Maybe it could be helpful for finding things like Greek fire and other things we've lost, but I don't know how an LLM is supposed to do that.
>>106767252
It's possible. If humans can do it, we can eventually build a machine that makes use of the same physics to do it as well. It may take us a while to figure out those physics and the fabrication process for such a machine, but this is really just the process of science and engineering. Eventually we will figure it out if we decide we have to.
That being said, the invention of AGI would essentially make human labor obsolete, which is incompatible with capitalism. Markets would collapse as real wages drop to zero, even though the AGI robots could produce everything we need with zero human input.
For that reason I don't think the developments necessary to actually reach AGI will ever happen so long as we practice capitalism; the pursuit of mass labor reduction through heavy automation will collapse our markets long before such automation is intelligent enough to have the property of "AGI". To progress to AGI we'll have to abolish the capitalist conception of commodities, the conflation of use-value and exchange-value, and therefore money.
So: conceptually possible, but incompatible with our present economy.
>>106767252
it's a lie to get funding to scale data centers. that's all.
>>106767252
>unamused.png
jito-me (half-lidded, scornful stare)
>>106767252
There's merit in the claim that no one really knows what the fuck happens if you stack intelligence. The problem is I don't think these people are stacking intelligence; they're trying to incrementally squeeze every little bit of intellect out of an inanimate toaster. You need an observer, whatever the fuck that means to you, to be an independent decision maker for AGI. Without it, all we're doing is burning down the environment for a really neat goon interface.
>>106768490
>You need an observer whatever the fuck that means to you to be an independent decision maker for AGI
Yeah. For me, I don't think a human can create an independent decision maker without contradicting what an AGI is. But I think at this point tech CEOs have changed the definition of what AGI is. I believe they're selling AGI as a multi-agent system that can do the work of an entire office from a single server.
>>106768532
>that can do the work of an entire office from a single server
They're still going to make us drive into the office to watch the AI do its thing so the commercial real estate market doesn't collapse. Human engineering, you see.
>>106767252
AGI isn't even an empirically falsifiable concept.
>>106767252
no, it's a bunch of nonsense. it's basically praying to god for spastic nerds.