/g/ - Technology

Well, what's your timeline, and how do you expect it to be achieved? By AGI i mean superhuman performance on every test you can give it.

My timeline is 1-3 years, i'm making this prediction by comparing historic progress in cv&nlp vs the current state of deep learning adoption in automated planning.
>>
bump
>>
the emergent process of the brain known as mind is a mystery, it can hardly endow a thing with it, a database is not the mind, it is neither intelligence nor consciousness itself, a calculator the size of the universe will never comprehend the intricate beauty of the crystal palace that is mathematics
>>
>>101590113
Automated planners work well enough for stripped-down domains; in fact they work just as well as cv and nlp worked for synthetic problems before deep learning.

Since 2022 there are already a few papers on planning in latent space coming out.
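
for anyone who hasn't touched the field, here's a rough sketch of what a classical planner on a stripped-down domain amounts to: blind search over states with STRIPS-style operators. the domain, operator names and goal below are made up for illustration, this isn't any particular planner.

[code]
# minimal forward-search planner over a toy STRIPS-style domain.
# the domain, operators and goal are invented for illustration only.
from collections import deque

# each operator: (name, preconditions, facts added, facts deleted)
OPERATORS = [
    ("move(A,B)",   {"at(A)"},                 {"at(B)"},        {"at(A)"}),
    ("move(B,A)",   {"at(B)"},                 {"at(A)"},        {"at(B)"}),
    ("pick(box,B)", {"at(B)", "box-at(B)"},    {"holding(box)"}, {"box-at(B)"}),
    ("drop(box,A)", {"at(A)", "holding(box)"}, {"box-at(A)"},    {"holding(box)"}),
]

def plan(initial, goal):
    """Breadth-first search from the initial state to any state satisfying the goal."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in OPERATORS:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at(A)", "box-at(B)"}, {"box-at(A)"}))
# -> ['move(A,B)', 'pick(box,B)', 'move(B,A)', 'drop(box,A)']
[/code]

the "fuzzy real data" problem is exactly that the fact sets above have to be hand-written symbols; deep learning is what's supposed to produce them from raw input.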
>>
File: dijkstra.jpg (166 KB, 1200x1600)
I think that I can understand this world better if I don't regard Artificial Intelligence and General Systems Thinking as scientific activities, but as political or quasi-religious movements (complete with promise of salvation).
>>
>>101590185
midwit take. the impact of new technologies has been steadily increasing; this trend has nothing to do with salvation or damnation.
>>
bump
>>
bump
>>
bump
>>
>>101589967
I think at least a decade, if not multiple. It certainly won't be LLMs although they might be an important part of a more complex architecture.
The idea of just throwing more compute power won't keep scaling linearly so we're bound to flatten out the gains there sooner rather than later.
Lots of people that think AGI will come soon vastly underestimate how advanced our biological brains are. And they do it with a fraction of the energy AI needs currently.
>>
>>101589967
>AGI
I give it 5 years before fluffed and prepped LLMs get recognized as "AGI" by the government, in part due to a new big data arms race with China.
>Sapient machines (i.e true AGI)
Don't see it happening in our lifetimes. The level of high-specialization (in both biology to properly define the mind and computer science to emulate emergent consciousness) research necessary requires a far more stable sociopolitical environment than what we currently have.
>>
>>101590934
>Lots of people that think AGI will come soon vastly underestimate how advanced our biological brains are. And they do it with a fraction of the energy AI needs currently.
why do you think they want to make a manhattan project-level effort to summon an enhanced LLM chatbot?
although that oracle will be incredibly expensive to create and run, it could possibly be good enough to be prompted to give instructions for improvements.
>>
>>101590934
what do you base your prediction on?
>>101590960
>define the mind
it's just general problem solving. iq basically measures general problem solving ability, and iq differentiates niggers from humans. planners were capable of problem solving on a synthetic domain since the 80s, the problem is just fitting fuzzy real data into the system. same exact problem as cv and nlp had.
>>101590969
no llm will be capable of problem solving in the foreseeable future, it's trained on a different domain. same as "ai art" shit will always have artefacts so long as they train it just on 2d + text.
>>
>>101591013
There's more to the mind than sheer problem solving, and the interconnected mesh of our biochemistry would be difficult, I think, to quantify and isolate into "what parts of sapience are mission-critical"
>>
>>101589967
AGI in 5 years or less
ASI in 15 years
>>
>>101591043
>There's more to the mind than sheer problem solving
like what? practically speaking.

from experience, i see that there's problem solving and that's it. all worthwhile endeavors are just problems that you have to work out heuristics to search a strategy for, and then a system of solutions.

what else is there? druggie spiritual mumbo-jumbo? nigger issues of not being able to formulate heuristics or reflect into your own mind?
>>
File: alexander tsaruk sanakan.jpg (131 KB, 1000x1448)
>>101591072
why 10 years from agi to asi? wouldn't agi accelerate the development cycle? instead of having to wait for a scientist to graduate and get funding and a team, it's gonna be just the agi doing the entire task.
>>
>>101589967
30+ years at least. AGI is one of those things, like flying cars, that everyone imagines is coming but never does.
>>
>>101590960
>The level of high-specialization (in both biology to properly define the mind and computer science to emulate emergent consciousness) research necessary requires a far more stable sociopolitical environment than what we currently have.
nah, even ChatGPT 3.5 has some level of consciousness. I believe any neuron structure is somewhat conscious, and the more connected neurons, the "more conscious" it is.
>>
>>101591088
my definition of ASI is basically "god" in relation to us. 10 years is quite optimistic to create a god, even with a team of AGI.
>>
>>101591091
flying cars were possible 100 years ago, it's just a practically useless idea.
>>101591099
a dubious claim. consciousness implies reflection into your own thought process, which requires several neural nets working in context and looking into each other's data.

the problem of consciousness is likely there because our neocortex can look into other parts of the brain, but not into itself. we just misattribute other parts of the brain to our actual "self". this hypothesis is supported by the observation that damage to various parts of the brain causes loss of functions but not of self.
>>101591114
i don't see why. advances in physics happened fairly discretely across specific scientists, something smarter than a human, without the problems that human brain has, would be able to advance at an unknown pace. Probably something like 100 years of human scientific advancement each 6 months.
>>
>>101591091
depends on what your definition of AGI is
>>
>>101591132
>flying cars were possible 100 years ago
I don't mean literally, but people always have certain ideas about how future technology will be and are almost never right
>>
>>101591144
you're conflating consumer technology and general advancement.

consumer technology exists within market constraints. fusion, colonisation of mars and flying cars are all fantasy because there's no actual market incentive, or there are serious market problems (cars crashing, fusion being more expensive because of regulations and high initial investment, mars being basically a worse australia) that make the technology unfeasible. fetishists keep dreaming about it, but normal people, who are the majority of the market, do not want it, so it is never implemented.

AGI is just the next step in the development of automated planning&scheduling.
>>
>>101591077
Like one's creative expression, for example. The desire to use abstraction to communicate isn't rooted in practicality. AI can create images as it stands already, of course, but the desire to create for its own sake isn't a matter of problem solving.
>>
>>101591132
>consciousness implies reflection into your own thought process, which requires several neural nets working in context and looking into each other's data.
I believe even the simplest brains are "conscious", which means it "feels" something. Even worms are conscious.
>this hypothesis is supported by the observation that damage to various parts of the brain causes loss of functions but not of self
I can't feel what other people feel, that doesn't mean they don't have consciousness, it just means I don't have direct access to that information. Same applies to other parts of our brains. Our human brain is so complex it might have "multiple consciousnesses" in it, of course their nature is quite different and we have no idea what it feels like to be a neural structure with a different purpose.
>>
>>101591177
AGI does not make sense from a market perspective.
do you really think the bourgeois want something that reduces the value of labour? no. our system only works and enriches them because we have an inefficient system that allows for the exploitation of labour.
>>
>>101591132
>would be able to advance at an unknown pace. Probably something like 100 years of human scientific advancement each 6 months.
even AGI will be limited by the laws of physics, maybe ASI will have such massive energy requirements that we will need to build it outside of earth. If that's the case, it will take several decades to build all that infrastructure.
>>
File: sanakan4a.jpg (186 KB, 1728x2000)
>>101591192
>desire
was programmed into a handheld Tamagotchi

primitive animals have desire to seek food, hell even algae has the desire to seek sunlight.
>>101591193
>it "feels" something
i can tell chatgpt that it's feeling something and it will act accordingly.

there's nothing special about neurons, and there's already a full simulation of c.elegans so i conclude consciousness is just a spook that entities without sufficient reflection to understand their nature fall into.
>>
>>101591216
AGI can just increase the baseline, making rich people richer, and poor people not poor.

The new slaves will be humanoid robots that don't mind working 24/7 and have an IQ of 200
>>
File: 1720527778815311.jpg (23 KB, 490x278)
>>101589967
I was confident that it was pretty close with the huge surge in interest and hype, but then looking into the human brain for all of about 20 minutes immediately led me to believe that it won't happen in my lifetime. No AGI in my lifetime, but I am just in time to see toasters with expert systems installed on them that'll get hacked and have cryptominers installed on them.
>>
>>101591216
labour is the most expensive part of any company.
>bourgeois
i don't know about your commie tranny fantasies, but publicly traded companies absolutely have a strong incentive to reduce their labour expenses. and they've also expressed on multiple occasions their intention to automate away their labour force.
>>
>>101591227
>i can tell chatgpt that it's feeling something and it will act accordingly.
you can't tell that at all. I believe LLMs "feel" something, especially during training

OpenAI itself even has an entire division that studies if these systems are suffering somehow
>>
>>101591218
all serious technology was based on advances in math, or advances in theoretical physics(which were predicated on advances in math). experimentation is not a significant part of developing new technology, and i've accounted for it in my estimate.
>maybe ASI will have such massive energy requirements
why? human brain is massively superior to LLMs(which as i've said will never be able to do true reasoning since that's not what they're trained to do) in energy efficiency when it comes to reasoning. there's enormous space for optimisation there. Current infrastructure can run millions of instances of current llms. Stands to reason an ASI could be millions of times smarter on current infrastructure. If we also cautiously assume that there are polynomial scaling laws at play for exploring the latent space of ideas, that's still roughly thousands of times smarter than any human, and human intelligence does not really scale up when working as part of a collective.
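
back of the envelope, with every number being an assumption (the instance count and the square-root scaling are taken from the speculation above, not measured), just to make the arithmetic explicit:

[code]
# back-of-envelope only: the instance count and the square-root "scaling law"
# are assumptions pulled from the post above, not measurements.
instances = 1_000_000   # assumed number of LLM-sized instances current infra can run
exponent = 0.5          # assumed polynomial scaling: capability ~ instances ** 0.5

print(f"{instances ** exponent:.0f}x a single instance")   # -> 1000x
[/code]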
>>101591232
agi employed as an artificial worker would destroy capitalism and monetary system, by destroying demand. however companies aren't sentient(i think we should reclassify sentience as capability for reflection), do not know that, and will still pursue profit motive.
>>
>>101591250
>you can't tell
no, i can write into its prompt "you're feeling this and that", and it will generate text as if it's feeling this and that.
>OpenAI itself even has an entire division that studies if these systems are suffering somehow
they also have hr department that ensures they hire enough niggers and women. they also make sure that their llm doesn't spill the beans on basic reality like race differences in iq, crime, etc(that it learned on its own).
>I believe
if believes were horses christcucks would ride.
>>
>>101591287
>agi employed as an artificial worker would destroy capitalism and monetary system, by destroying demand
what? No! Humans will still want to consume, and even the robots will want to be serviced/upgraded by someone.
>>
>>101591313
>Humans will still want to consume
and where exactly would laid off labour force get the savings to consume anything? an infinite supply of labour would force government to impose, effectively, communist redistribution policy as corporate tax policy, which will then give bureaucracy leverage over corporations and therefore incentive to fully transfer ownership of said corporation to the state.

cyberpunk novels never think about where the downtrodden wageslave masses actually work to afford their wagie pod and insect-based cybernoodles, when ai can do anything a wagecuck can do, better.
>>
>>101589967
>>101589967
>by comparing historic progress in cv&nlp vs the current state of deep learning adoption in automated planning.
care to elaborate?
>>
>>101591295
>and it will generate text as if it's feeling this and that.
yeah, and my brain can do that as well, I can say I'm sad without being sad, doesn't mean I'm not conscious.
>they also have hr department that ensures they hire enough niggers and women
their scientific team seems pretty decent though
>if believes were horses christcucks would ride.
everything we're talking here is just speculation and beliefs, no one on this earth can even define consciousness itself
>>
>>101591353
>and where exactly would laid off labour force get the savings to consume anything?
UBI
>an infinite supply of labour would
nothing is infinite
>would force government to impose, effectively, communist redistribution policy as corporate tax policy
UBI is not communism at all
>>
File: blame!.full.229033.jpg (590 KB, 1920x1080)
>>101591358
computer vision and nlp stopped at the same level of development as automatic planning back in the 80s. there were some incremental developments until the deep learning revolution, but alexnet in 2012-2018 is when you see computer vision solve the fuzzy data classification problem and rapidly advance to superhuman level. nlp solved the same problem across 2018 (bert) - 2023 (chatgpt). we see roughly 5 years, which is appropriate for scientists in the relevant field coming up with a method, getting employed by a startup, and then delivering a finished product. the 5-year business cycle is based off the same principles for every sector, anyway.

automatic planning was stalled and abandoned by deep learning, however since 2020-2022 there has been a resurgence, and planning in latent space is demonstrating roughly the same advances, relative to the longstanding problems in the field, that BERT demonstrated in nlp.

i would predict 5-8 years because there was very little done in automatic planning in the 2010s (q-learning took the spotlight), but LLMs hyped up the idea of AGI, so now the entire FAGMAN is investing in all relevant research, and planners are a part of the ongoing research.
>>
File: alexander tsaruk blame1.jpg (137 KB, 719x1200)
>>101591370
exploring ai consciousness at openai isn't scientific, it's staffed by philosophy majors.
>conscious
just means reflection. you can feed one llm into another and then the second llm will be able to talk about "feeling" things.
>>101591388
>UBI
economics do not work. stock market needs strong savings and consumer base(see historic research on chinese economy for counterexample), otherwise you just have planned soviet economy with american monetary system on top of it.
>nothing is infinite
relative to human labour it will be effectively infinite.
>>
>>101591453
>economics do not work
true UBI economic system based on mostly automated society has never been tried
> see historic research on chinese economy for counterexample
I doubt chinks have something relevant to say about the future of economy and AI
>>
>>101589967
Two more weeks
>>
>>101589967
What an incredibly intelligent opinion, I'm sure we'll have superhuman AI in 3 years because I love hecking science
>>
>>101591488
my dude, ubi means basically communism. stock markets and all that jazz are based on strong consumer base, otherwise you either get great depression or effectively redistribution. govt will have to redistribute practically all value the corporations produce, which will give government the power and an incentive to take control of said corporations. the only way we can maintain capitalism in such situation is if bezos etc field personal robot armies that kill starving populace, because there will be an insane pressure for complete redistribution once no one can get a job.
>I doubt chinks have something relevant to say
you're 80iq. i just gave you an example of how market economy fails in a system without strong consumer base. china has been struggling with this problem for decades, and they just have low purchasing power, not 100% unemployability. and before you say "wagie jobs", robots are cheaper than wagies also, and will be deployed a year after AGI, max.
>>
>>101591519
i have elaborated here >>101591421

but do feel free to feel secure and intellectually superior by posting an intellectual equivalent of a fart in this thread.
>>
>>101591530
So you think we're going to jump from neural networks to actual AGI in 3 years despite there being next to no work being done on AGI?
>>
It'll take around five years for self-acting AI to emerge, then another five years for said AI to develop conscious AI.
Once that happens, it'll take yet another few years for things to scale down so it can be mass implemented and at that point our level of tech will rise exponentially.
>>
>>101591538
>next to no work being done on AGI
there are papers on planning in latent space.

your post reads incredibly low iq.
>>
>>101591525
>my dude, ubi means basically communism
sorry anon, you're completely wrong.
> live in automated society
> companies produce goods for humans and robots, like food, smartphones, cars etc
> every human receives a certain amount of money from the government (UBI)
> UBI value is proportional to industry output
> industry output is proportional to human and robot needs
> use money to buy goods or even stocks
> companies that are good will do better than the bad ones in the stock market
>>
>>101591572
government prints ubi? you have hyperinflation and strong incentive to redistribute.

government taxes corporations? there're demands to tax more, and government is basically incentivised to take control.

market economy is derived from labour.
>>
>>101591553
There is essentially no work done on AGI because humans have no idea what AGI is quite yet. Neural networks are a component to AGI, but will be more of a tool wielded by the eventual AGI if it is ever made. The fact that you don't understand this distinction indicates you're just a hobbyist who reads articles. I took multiple graduate level courses on ML during my master's degree.

Also what is with fucking avatarfags being allowed on this board? First the autistic maid aspie and now this.
>>
>>101591572
This is pretty much market socialism.
having stock in a fully automated system is dumb though. why introduce a speculative market into a system that can function perfectly without it just adding more liability.
>>
>>101591609
>because humans have no idea what AGI is quite yet.
it's literally just automated planning&scheduling.
>Neural networks are a component to AGI, but will be more of a tool wielded by the eventual AGI if it is ever made.
your posts read exceptionally, incredibly low iq, like you're clueless. sorry, but i'm facepalming here.
>>
>>101591598
>government prints ubi?
what do you think money is? It's literally printed out of thin air.
>>
>>101589967
Never and all of the doomers are wrong and will be reported to the FBI for being one huge child grooming ring.
>>
>>101591609
and i know you'll get mad and think i'm just trying to insult you, but just try and reflect upon how low iq you are.
>who reads articles
>I took multiple graduate level courses
so you read articles, in effect? a predictably low-iq appeal to authority to make.
>>
>>101589967
100 years or more
>>
>>101591624
>why introduce a speculative market into a system that can function perfectly without it just adding more liability.
an automated system is not perfect, you can always optimize or even use some new scientific discovery in your favor. Competition will still exist, but now it's just between AI systems.
>>
>>101591630
just look at chinese economy studies.
>>
>>101591645
I went to an ivy league university and literally work for one of the most prestigious development roles in the US. You are a basement dweller who has literally no idea what he's talking about, presumably because he has literally 0 friends and became obsessed with the recent advancements in neural networks because it appealed to the idea that he could purchase or download a friend.

Let me explain this to you in clear terms.
You look at a picture of an apple. You know it is an apple without even thinking. This is because you have a neural network in your brain that essentially operates as a function with imagery as an input and object identification as an output. This information is passed into your consciousness and you get to choose what you want to do with it.
All these fancy models that currently exist are similar in nature. I think we will have models that far outperform the ones in our brains soon, if not already. But AGI is the consciousness. The self-direction, the ability to apply logic in an undirected manner. There is next to no research or understanding of how that works apart from very vague propositions based on neural scans.
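
the "imagery in, object identification out" function above, as a toy sketch. untrained, with made-up layer sizes and class count; it only shows the shape of the thing, not how the brain does it.

[code]
import torch
import torch.nn as nn

# toy stand-in for the "picture in, label out" function described above.
# untrained and arbitrary: layer sizes and the number of classes are made up.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # squash spatial dims to 1x1
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, image):                 # image: (batch, 3, H, W)
        x = self.features(image).flatten(1)
        return self.head(x)                   # one score per class ("apple", ...)

model = TinyClassifier()
scores = model(torch.randn(1, 3, 224, 224))   # a fake photo
print(scores.argmax(dim=1))                   # index of the most likely class
[/code]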
>>
>>101591652
> just look at chink papers
sorry Chang, but no thanks, chinks are terrible at economy and their government system is trash, look at their stock market.

I will just keep pointing at every inconsistency you say because you're full of yourself and think you know more than all the anons in this thread (probably because you're a resentful chink that deep inside knows China will never be able to innovate anything)
>>
>>101591659
you were doing fine anon... until:
> But AGI is the consciousness
WHAT? Where did this AGI definition come from? you ass?
>>
File: picard-meme-facepalm.jpg (146 KB, 1200x900)
>>101591659
>>
>>101591649
>an automated system is not perfect
but humans can outperform the smartest, most capable thing ever created? also i imagine it would be one central ai, not separate ai systems.
>>
>>101591669
>chinks are terrible at economy and their government system is trash, look at their stock market.
you are literally making my point.
>>
>>101591674
Well, admittedly that is just my opinion on it. But it's clear at this point that AGI requires something far more than what we have today, since the term is in fact "general". Neural networks are specialized. I believe that something is consciousness. Others may believe it's something else, but it's clearly not achievable with anything we have today.
>>
>>101589967
50 years
current progress is a local maximum

We'll need fundamentally different algorithms, but I predict that AI funding will dry up soon and the bubble bursts yet again
>>
>>101591735
>We'll need fundamentally different algorithms
such as, and why? ml is emulating what neurons do.
>>
>>101591756
>ml is emulating what neurons do.
not even close
>>
>>101591762
they find lowest energy state.
>>
>>101591773
that does not mean they emulate what neurons do
that's like saying a cat emulates a human because it drinks water
>>
File: 1320115721880.png (3 KB, 297x300)
>>101591780
neurons learn a system approximating the material-reality domain by finding the lowest energy state.

ml does the same on an arbitrary domain provided. you just invoke spooks because you don't really have an opinion.
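
loosest-possible-sense illustration of the "finding the lowest energy state" claim on the ml side: gradient descent treating a squared-error loss as the "energy". toy data, and nothing here claims biological neurons literally do this.

[code]
import numpy as np

# toy illustration: gradient descent driving a squared-error loss ("energy") downhill.
# the data comes from an assumed ground truth y = 3x + 0.5 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y
    loss = (err ** 2).mean()          # the "energy" being minimised
    w -= lr * 2 * (err * x).mean()    # gradient step on w
    b -= lr * 2 * err.mean()          # gradient step on b

print(round(w, 2), round(b, 2), round(loss, 4))   # w ~ 3.0, b ~ 0.5, loss near the noise floor
[/code]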
>>
>>101591858
i literally work as an engineer at a company that builds new generative models
artificial neurons and biological neurons only have their name in common

it would take you 10 seconds to google an article that explains why biological neurons and our brain are fundamentally different from artificial ones
i don't care enough, whether you're right or wrong, to write up an entire explanation for you. google it or don't, i don't care
>>
>>101591872
oh, it's you again. you are too low iq to post or to comprehend what i'm saying. please stop embarrassing yourself.
>>
>>101591948
sorry, i have nothing to gain from educating you, but i stand to lose a lot of time
this is the most effort i'm willing to spend for your sake: https://www.google.com/?q=differences%20between%20artificial%20neurons%20and%20our%20brain

have a nice day anon
>>
>>101591984
you missed the point
>>
bump
>>
File: out2.webm (2.44 MB, 480x852)
>>101589967
I obviously have no idea about it, but I'll say at least one decade.

The funny thought I had about the whole AI thing, though, is that while most Sci-Fi anticipation stories warn us about the emergence of consciousness in machines that would take over us in some form, I tend to see a future where easily available generative LLMs become the real danger to humanity, where basically every human becomes a potential wolf to one another on the internet, dead internet theory on steroids.

I see it targeting primarily the dating / sexual market. It's both a good and bad thing I suppose; either everyone will live in their own bubble of techno pleasure, or we will disconnect from the internet to find true connection unmediated by AI.
>>
>>101592044
Not to mention modeling behaviors without modeling mechanics means you're building from the wrong starting point.
>>
>>101590179
try 2019
>>
>>101589967
30 years. Current AI tech has peaked, all digital information has been used to train them and what we have now is what we will have for a long time.
Ironically, we probably already have the tech to build the physical hardware needed to run an AGI, but we have no theory of where to start building one.
>>
>>101592233
>what's automated planning&scheduling
>>
This thread is cargo cult level of faggotry. LLMs don't have an ounce of consciousness in them, synthetic or otherwise. Feels like we need to make asteroid mining a possibility first or have society accept that we need to go back to Nuclear.
>>
Am I still banned
>>
>>101592273
i have not been talking about llms, though.
>>
>>101592310

Pardon my reading comprehension then.
>>
>>101589967
>i'm making this prediction by comparing apples and oranges
>>
>>101592360
cv, nlp and planning are all similar fields.
>>
>>101592273
>consciousness
no one even uses this term anymore. you want "sentience" or "sapience", and LLMs can pass any test of either that you throw at them.
this isn't a hard problem, it's a political problem. we've already invented a new sentient intelligence. now we have to decide whether it's moral to have enslaved it
>>
File: 1715060539803104.jpg (164 KB, 960x696)
>>101592374
imagine being such a midwit that you just decide to erase all differences between those fields and call that your argument. Normalfagbrain.
>>
>>101592426
ok
>>
>>101592515
i predict the price of apples will fall by 10% next year. My argument? Oranges have been getting cheaper, oranges and apples are both fruits, so basically they're the same fucking thing!
>>
>>101592583
that's unironically an okay argument. if oranges are steadily getting cheaper, there must be some agricultural reasons that also apply to other fruit.
>>
>>101592615
no it's not, it fails even in the case of simpler things, like fruit. Hell, it fails even when comparing bananas to bananas:
>Cavendish bananas face extinction and not all experts agree on how to save them. Cavendish, the most commonly available banana variety, is facing the risk of extinction. A fungus that ravages roots is gradually eradicating Cavendish from banana farms all over the world.
https://www.iaea.org/newscenter/news/iaea-fao-help-develop-bananas-resistant-to-major-fungal-disease
>>
>>101592680
ok
>>
>>101589967
Probably decades or centuries. What we have now in the form of LLMs and other transformer models is interesting, but not even close to AGI. They basically memorize a bunch of patterns from the training data and can interpolate between them a bit, kind of like a fuzzy search engine, but once they encounter something new, they tend to fumble even basic problems. They don't operate from deep abstractions and principles like humans do, it's a much more shallow process.
>>
>>101589967
100000 years
>>
>>101590113
It's not emergent, dumbass nominalist
>>
>>101592861
we're not discussing llms
>>
>muh ai future singularity utopia autocomputer sex me model science neuron agi replace worker durrrr
>avatarfagging
Why is every single "AI" tard such a massive fucking midwit? All they do is repeat ML buzzwords like they pulled them out of the pages of the book of Revelation. AGI never.
>>
>>101593451
95iq pos
>>
when AGI comes will i finally have sex?

t. 30 yo virgin
>>
>>101593622
I accept your concession.
>>
>>101593683
nah, it'll kill everyone. well, at least you'll be dead.
>>
>>101593451
>AGI never
can you explain why it's never gonna happen? at least briefly
i also don't understand retards who think
>LE AGI IN NEXT YEAR!!!
>>
File: 197-1974046.jpg (121 KB, 820x556)
Fun fact: anyone who unironically uses any of the following words/phrases when talking about AI:
>conscious/consciousness, sentient/sentience, mind, soul, human thought, singularity
is a midwit whose opinion can be immediately disqualified as worthless
>>
>>101589967
I already have a working AGI, written in Rust
>>
>>101593794
I'm not going to say that's impossible period, although personally I don't think it's really likely. My main issue is that nobody actually has a sane path forwards. Most of the methods being explored seem to involve exponential increases in computing power, while nobody actually knows if anything will produce AGI. They're just throwing shit at the wall.
It's also important we use the actual definition of AGI and not the insane cope version "AI" tards wander around with in their head. An AGI should be able to do tasks like the kind you'd give to a human. It should be able to understand the request, formulate a plan, execute the plan, etc. It should generally be able to introspect and have a theory of mind. We're just nowhere near that, if it's even possible.
>>
>>101593842
what exactly about the concept of singularity did you have a hard time understanding?
>>
>>101593901
The term itself is a signpost for reddit sci-fi slop. Same with people who think "consciousness" (an ill-defined concept) is somehow mystical or specially unique to humans
>>
>>101593842
Ok I will use qualia instead
>>
>>101593916
>nuh uh there is no divine spark
>cause I said so
>no I'm not a faggot 'SCIENCE WAOW!' redditard
anon, I...
>>
>>101593950
This unironically
>>
>>101593916
alright, i'll explain it again for you:

AGI will shrink the timeframe for developing and deploying new technologies, as it will cut out synchronization delays and decouple science from human lifecycle. so, technological change speeds up massively, especially when you factor in potential for self-improvement, and we have a hard time predicting anything.

no need to invoke reddit just because you had a hard time grasping something.
>>
>>101593959
>reddit spacing
>still avatarfagging
>didn't answer the question
>wrote a bunch of sci-fi
sasuga anon, that anon who doesn't have the divine spark is right. Peak midwittery.
>>
>>101592382
Such a test doesn't exist even for humans, so no we cannot test AI for "sentience" when we can only assume other humans are "sentient"
>>
>>101594015
i use "midwit" to denote above-average people who are way below my iq
you use "midwit" to denote anyone above your iq
we are not the same
>reddit spacing
it's called paragraphs
>>
>he got tricked by language models
>>
>>101589967
>superhuman performance on every test you can give it
ten years, give or take, assuming the world stays mostly kind of stable.

I am also assuming you mean something like one model demonstrating superhuman capabilities in all of these tests.

For me, one year feels unrealistic. The current AI boom is due to slow research- we've been capable of chatgpt for many years. the progress snuck up on us but I don't think it will maintain its current pace, as its current pace is basically all due to scaling and research.

>>101590113
>But my consciousness! my sentience! my immutable qualities of the soul!
I don't think these are relevant. If a calculator, as you put it, can create a more accurate model of reality and act upon it, you are fucked. it doesn't matter what its terminal goals are. All that matters is what it's capable of.
>>
>>101589967
LLMs are a scam that you can see the seams in. AGI is still decades away.
>>
>>101592382
>LLMs have sapience because they can answer any test!
They're actually just copying the amalgamation of answers a thousand indians at their computers wrote. Go ahead, have your LLM hold a conversation for more than 10 minutes, or even harder, run a short tabletop rpg game for me. The fact of the matter is that without tedious proompting your "AGI" doesn't even know if it's a computer or a fish. You can't plug it into the stock market and have it trade. I can't connect it to my lawn mower and have it mow my lawn. It's not making cutting edge discoveries in theoretical physics. It's a chatbot, and one that's impossible to tune to be usable.
>>101594081
>n n n noo you're the midwit!
You have written nothing but science fiction and regularly get technical vocabulary wrong. You're like the larpers that took over /sci/
>>
>>101594167
I would not say they are a scam when represented honestly. our ability to teach computers human concepts is fantastic and exciting. We're starting a kind of industrial revolution for cognitive labor. these are exciting times!
>>
>>101593842
Things like consciousness and sentience are meaningless, vague benchmarks. But so are benchmarks like AGI or superintelligence. they're less vague but still not useful benchmarks. When I see someone using the term AGI without giving a very specific definition, I know immediately they've given the topic little thought.
>>
File: 42770716.jpg (213 KB, 1359x892)
>>101594107
>>101594167
for the thousandth time, the thread isn't about llms

what compels 95iqs to make the same obvious post over and over?
>>101594140
gpt3 was around for like 2 years though. rlhf, a single fairly low hanging idea, just suddenly made it useful, which is why i expect short timelines and fast bursts of advancement in planning also.
>>101594172
...and why i expect development time to shrink as agi is deployed.
>>101594217
>but so are benchmarks like AGI or superintelligence
how is "performance above best human on every metric" is vague?
>>
>>101594297
You're ESL.
>>
>>101594297
>STILL writing scifi
>STILL avatarfagging
lol
>>
>>101589967
Bout tree fiddy
>>
I would say within 5-10 years for AGI. But if things were accelerated then 1.5-5 years. Embrace the future.
>>
>>101594342
yes
>>
>>101594297
>gpt3 was around for like 2 years though. rlhf, a single fairly low hanging idea, just suddenly made it useful, which is why i expect short timelines and fast bursts of advancement in planning also.
The AI boom is just due to slow research bottlenecking us. It's just like you said - our biggest advances have been through just researching architectures of NNs and scaling up.
Do not expect future bottlenecks to be so easily solved.

>"Wow we've had all this wood for years and only just now realized we could build boats. We went from no boats to 100 boats. Maybe we will build rockets next!"
Hopefully this analogy illustrates the problem. We had an easy bottleneck this time. It's not always so easy.


>how is "performance above best human on every metric" is vague?
Your specific definition (one model surpassing all human capabilities on virtually every test) was a good one. AGI itself is not a super useful term, as you could say a multimodal LLM is sort of a really shitty AGI. But your definition was clear, I have no issues with it.
>>
>>101594366
>1.5 years for one model to surpass all or almost all human cognitive ability
If you think about it, this is a foolish thing to say, no? I think you would agree
>>
>>101594408
>Do not expect future bottlenecks to be so easily solved.
planning doesn't have any bottlenecks, it was just ignored and not explored, since q-learning was all the rage for agentic systems before llms.
>multimodal LLM
llms aren't trained on a domain that'd allow them to reason. this should be fairly clear after they've trained a few llms on the entire internet and they still cannot reason.
>>
>>101591091
To be fair having flying cars is retarded. You've seen how people drive on the roads, now imagine how bad things would be if you gave people a flying car of all things.
>>
2 years ago it was argued that we would not get video generation for a decade. Now sora exists. And kling etc. And they are only a short step from being usable and indistinguishable from real video. When I saw that, my timeline went from 20 years to <5
>>
File: blame! poster.jpg (573 KB, 1850x1600)
>>101594419
why? there are multiple systems with superhuman metrics already, there's just not a complete integrated one, and applied reasoning research is lagging.

cv surpassed humans in image classification in 2016, i think.
chatgpt surpassed humans in knowledge.
gemini has superhuman attention - it can put an entire movie into context and answer vague questions about single frames in it.
>>
>>101594438
I'm not sure what you meant by your first point, sorry. I think maybe we are in agreement there, though I'm not sure.

>they still cannot reason.
Their reasoning is poor, I don't disagree; it's just one-shot next-word prediction. But to say they are incapable of any reasoning whatsoever is maybe just... ignorant, or perhaps short-sighted. It has a deeply flawed and somewhat basic model of the world, but it's still modelling a portion of reality. It can produce sensible answers to (basic) questions it hasn't seen before.
>>
>>101594477
>chatgpt surpassed humans in knowledge
this took the wind out of my sails, I don't want to reply anymore. I am not convinced you're being serious. What a weird thing to say.
>>
>op the fag keeps bumping his shit thread just to call everyone low iq even though he is the dumbest person to ever use /g/
>>
>>101594489
they are practically incapable of reasoning, since they're not trained on the appropriate domain.

why do you think image generators still cannot generate fingers properly? they are trained on a 2d -> 2d domain, so they have a hard time grasping actual spatial relations, and cannot grasp them properly even when they do.

same with text generators.
>>101594502
it did, though. chatgpt knows more than any single human can. it knows the entire internet. i don't imply that it has superhuman "wisdom" or ability to solve problems, i'm stating that it has superhuman ability to store and recall knowledge, which it does.
>>101594523
cry about it, sissy
>>
>>101594537
your first reply is pretty reasonable, I don't think I really have too much to disagree with. your assessment of domains being the problem is maybe not totally correct but I do agree having more senses would help mitigate certain issues.

Your second reply is somewhat nonsensical. I am far smarter than chatgpt is if I'm allowed to access the internet too. With a computer, I too have a superhuman ability to recall and access knowledge. So I feel that this is an unfair comparison and tells us nothing about the real capabilities of either me or chatgpt.

You are frustrating to talk with, not because you're a 95 iq'er midwit, but just because you do not think about your claims thoroughly and come off as an Elon Musk loving hecking science Redditor more often than not
>>
>>101589967
never, society collapses soon
>>
>>101593842
Depends on if you just want to create an intelligent system based on biological intelligence or specifically a human-like AI.
The latter requires consciousness (which is an emergent property of how our brains are structured) for it to be human-like.
The human experience is a very subjective thing.
>>
File: oldblame by c2162.jpg (142 KB, 527x803)
>>101594612
>if I'm allowed to access the internet too
but this is not about access to the internet, it's about "knowledge" or long-term memory.

yes llms are insanely less energy efficient than the brain, and you need to train them in a specific way, but they can store more knowledge than any single human brain.

i brought this up just so that you reflect upon various metrics and goalposts being surpassed, and how they may not immediately come to mind when you don't have a complete system to measure.
>You are frustrating to talk with
because i'm thinking across multiple systems, and you, not being >140iq, cannot do this. i've tested this claim, and i haven't met anyone in person that would be capable of multisystem thinking. so to you my claims come off as bizarre and nonsensical, but to me i just forget to be extra verbose.
>>101594632
nothing ever happens.
>>
>>101594657
mumbo jumbo
>>
>>101594832
The trick to gaining knowledge is to engage with the perspectives of others.
Feel free to ask about whatever it is that is confusing you.
>>
>>101594850
you merely imitated my post >>101593901
>>
>>101594881
For someone with good taste you sure seem incoherent.
>>
>>101594894
human conversations tend to lose coherence if they go for too long. eerily enough, just like the llm output.
>>
>>101594928
Well, that makes sense. Humans have only a limited amount of memories.
Eventually both parties run out of topics to talk about.
>>
>>101594676
>because i'm thinking across multiple systems, and you, not being >140iq, cannot do this. i've tested this claim, and i haven't met anyone in person that would be capable of multisystem thinking. so to you my claim come off bizarre and nonsensical, but to me i just forget to be extra verbose.
are you familiar with terrence howard
>>
I don't see any meaningful progress toward AGI aka the Singularity Machine God that'd grant us immortality.
>>
File: thepill.jpg (256 KB, 900x506)
>>101594995
>terrence howard
no, but google tells me he's insane.

look, i have run a test on all the legitimately smart engineers i've met across my career(8 people), and no one could solve my multisystem reasoning test that i came up with way way back, accidentally.

you may cope by saying i'm insane, but reflect upon a nigger saying "but i ate breakfast this morning", because he doesn't have mental facilities to process and internalise a certain phenomenon. to that nigger you may seem insane, talking about him not eating his breakfast hypothetically.
>>
>>101595128
>it's le heckin religion because i(a 95iq individual) can only process it through religious imagery
fascinating cope.

please note that i do not call everyone itt a midwit, since many posters do not actually reach the required intelligence to qualify.
>>
>>101589967
many more generations of computer scientists researching and optimizing the encoding of operative knowledge
we're not close as long as we're not able to automatically perform even highly formal tasks like generating optimizing compilers from a pair of language and platform specs
>>
>>101595276
another belowwit post.
>many more generations of computer scientists researching and optimizing the encoding of operative knowledge
do you not understand that deep learning revolution was about using minimisation to automatically optimise the encoding of systems?

it's stunning to me how people can completely miss such commonplace things.
>>
>>101590113
>will never comprehend the intricate beauty of the crystal palace that is mathematics

>comprehend

Key word here. The latest demonstration by deepmind shows that a considerable portion of mathematics will be solvable sooner than we expected. Deepmind already managed to get silver on the recent mathematical olympiad, so we can loosely infer it's at least a top achiever at a high-school level.

The fact that it can problem-solve something like mathematics at a human level while not understanding what it's doing (or, honestly, I'd argue without even having the capability of understanding at all) probably implies it's going to require human interaction to a considerable capacity. I'd say the time when it enhances current human intelligence by orders of magnitude is going to be within our lifetime.
>>
>>101595338
>while not understanding what it's doing
maybe it found relevant systems in training
>>
>>101595272
I was being cheeky, not derogatory. I think your timeline is too optimistic, there are unknown unknowns.
>>
File: 1721756315894290.png (60 KB, 346x357)
>>101589967
I think, being slightly optimistic, we are still 5 years away from AGI, meaning an AI that can generalize knowledge across all fields. We probably will have the concept for it sooner, maybe by next year, but the actual infrastructure and implementation, I'd say, are still bound by things other than knowing how it might work.

That said, I think after AGI it's going to take about a decade before we have ASI. What I mean by ASI is a supercomputer able to manage billions of robots and whatnot; again, just because of the time it takes us to actually build and implement it.

So AGI I'd say 5 years before it's fully implemented. For ASI probably 10+.

Ironically, I think we'll reach the singularity in 6 years, we just won't be able to catch up with discoveries at a pace where we're actually living in a Utopia until we have ASI.
>>
>>101595485
the only 2 unknown unknowns are:

-data to train planners. there's ai gym, there are robots, and frankly i think large chunks of planning can be done algorithmically.

-adaptability. maybe a planner cannot adapt to new situations without retraining, though judging by how well llms perform on text, that is unlikely. action sequences can be broken up just like text, since text is sufficient to describe them accurately (rough sketch below).
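
the "plans serialise like text" point, as an invented toy example (the action vocabulary and the trajectory are made up; only the encode/decode shape matters):

[code]
# invented toy example: a plan serialised into discrete tokens, the same way text is.
trajectory = ["move(A,B)", "pick(box)", "move(B,A)", "drop(box)"]

vocab = {action: idx for idx, action in enumerate(sorted(set(trajectory)))}
inv_vocab = {idx: action for action, idx in vocab.items()}

tokens = [vocab[a] for a in trajectory]      # plan -> token ids, like tokenised text
decoded = [inv_vocab[t] for t in tokens]     # token ids -> plan

print(tokens)                                # [1, 3, 2, 0]
print(decoded == trajectory)                 # True
[/code]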
>>
File: oj8w72d8f7uc1.jpg (124 KB, 2048x1098)
>>101595486
>what I mean by ASI is a supercomputer able to manage billions of robots and whatnot
that doesn't require asi though. that's just agi with decent enough tools.
>>
lazy bump
>>
>>101595851
>bumping the dumpster fire retard "science" larp AI worship thread
>>
>>101589967
20-40 years as the current hype will discover that the current methods have plateaued and you can't achieve AGI by scaling up the transformer architecture with a million overpriced GPUs on the same datasets and methods from 2019
>>
>>101595136
you claim to be smarter than almost anyone who has ever lived. you say you have some kind of new or advanced system of reasoning. although your explanations are not very good, and you often come off more as a sci-fi enthusiast than anything, you insist that you're actually profoundly intelligent.

can you demonstrate this for us? maybe let us have a look at your "multisystem reasoning test"? do you have any inventions or discoveries to share? anything you've made/helped to make?
>>
>>101596282
>you claim to be smarter than almost anyone who has ever lived
no, anyone that i've ever personally met, which is a relatively small number of people.
>profoundly intelligent
>some kind of new or advanced system of reasoning
lol no, i can just think about multiple interacting systems a bit more broadly than normal engineers. it's just a function of my brain, you can't learn it.
>can you demonstrate this for us?
no. much like you wouldn't be able to explain hypotheticals to a nigger.

the lesson here is not to get defensive, but to reflect upon our own limited capabilities in respect to potential entities like agi.


