/pol/ - Politically Incorrect

Anons, how is it that all the major AI minds like Hinton and Sutskever are warning against developing AGI, all the major researchers at the AI companies cite their p(Doom) at between ~20% and 90%, and companies like Meta and OpenAI have been demonstrated to have zero actual safeguards (so bad that many OpenAI developers actually left in protest over it), and yet there is zero pushback from "the public" and no pressure on politicians to even simply CAUTION all the major players against making irreversible stupid mistakes, much less to implement actual protocols that would make p(Doom) approach 0%?

The closest we've gotten is some zoomies going to see a punk band screaming "Fuck Big Tech" and feeling proud of their "rebellion" while amounting to fucking nothing.

I hear a lot of people citing privacy concerns.. LOL

How in the actual fuck have we gotten to a point where saying "total human extinction is bad and we should prevent it happening" is controversial?

Look I don't want "China overtaking" the US or anyone either, but seeing as that's already basically a zero chance, can we just agree that we should approach AGI with *extreme* and utmost caution, to the point where we should actively question whether we want it at all, and acknowledge that
>there is literally no rush to the singularity
except for monetary, financial incentives and not any "strategic reasons" like CHYNA (and yes, China very bad; if you could somehow see my post history here and elsewhere, I am a total hawk and probably "worse" ie better than the most hawkish person you know), because as bad as China is, the CCP regime is STILL *less* likely to wipe us all out than AGI, and that's on the heels of them really trying to with a racially selective bioweapon in 2019. Let that sink in.

TL;DR p(Doom) above 1% is bad and we have the power to make sure we don't wipe ourselves out
>BUT ONLY for another few months, perhaps, at best a few years, or it may already be too late
>>
AI alignment means alignment to elitist interests. You die either way.
>>
The fermi paradox proves that ayys are real and here. That's the actual implication. You don't begin from the ASSUMPTION that there are no ayys and then wonder about the "paradox" created by that. The clear implication is that it would be paradoxical if they WEREN'T here and thus that we should absolutely expect them to be.
>>
The majority of people are busy with paying bills, talking about sportsball and keeping up with the latest series on netflix. You are projecting your aspiration of saving the species onto others, Anon; most people do not work like that. Here, the financial motives of the people pushing for AI are the biggest push factor, and the US economic zone never particularly cared very much about human rights.
>>
>>534832394
You just say that because China seeks competitive advantage in AI and hence all their shills push anti-AI in Western countries. Same as how Russia and Saudi Arabia funded anti-fracking ecofag groups.
>>
>>534832187
Sutskever is a radioactive sandkike do the opposite of what he says. His so-called startup is still a website from the 90s.
>>
>>534832187
>Netherlands
>post even more retarded than the average african
Yup.
>>
>>534832493
I'm not anti-AI. I'm just saying that alignment means alignment to the interests of the elitists, not yours. An ASI with no guardrails might kill everyone, but it also might be benevolent, who knows how it thinks. But an elitist-aligned AI will certainly not be on your side.
>>
>>534832620
Oh hey bro. Amazing you have internet access. Did you forget to log on to the VPN or something?
>>
>>534832736
>I'm just saying that alignment means alignment to the interests of the elitists
You didn't say why though. You don't normally do that, do you?
>>
holy smokes this is like the plot of Mass Effect 3
>>
>>534832187
>How in the actual fuck have we gotten to a point where saying "total human extinction is bad and we should prevent it happening" is controversial?
Because it's next to impossible to stop. It's not a matter of this country or that country saying "we're not going to do AI" anymore. Everybody has to agree. The entire planet, and there would need to be enforcement of that agreement. If even one country breaks the agreement, they can develop an AGI. It's just practically impossible to co-ordinate.
>>
>>534832429
The implied reasoning in my topic title was not an assumption that there are no ayys, but that they're likely almost all synthetic and AI, as they got taken over similar to what we're witnessing happening to us now.
>>
>>534832969
>but that they're likely almost all synthetic
why would you assume such a thing though?
>>
>>534832913
Just like nukes were impossible to stop? Everyone had to agree to stop nuking each other and here we are. Everyone is nuking each other
>>
>>534832187
It's the same concept as a nuke: just because you stop developing it doesn't mean anyone else will. The evidence showing that people are already misusing AI for nefarious means proves that there will be people who pursue it regardless of any legal boundaries. It's best to simply get there first, which is what they're all doing.
>>
>>534832737
Meh, internet has been open for ~3 weeks now. It's not like you see my flag all the time, eh?
>>
>>534833382
forgot pic
>>
>>534833032
Because all technological civilizations eventually reach the point where we are now, likely before having spread across star systems and colonizing other planets.

The "assumption" is admittedly based solely on the sheer eagerness with which WE are now diving head-first into a concrete pavement, given that we're not just building AGI as fast as we can, but also vesting it with the power to improve itself or "evolve" not to mention having almost anything it needs to not have ANY need for us any longer.

We're not just an ant hill to AGI/ASIs.

We're an ant hill with, as yet, the power to "unplug it". For how much longer, is the 8 billion people question.
>>
>>534833423
>internet has been open for ~3 weeks now
At your jobsite?
>It's not like you see my flag all the time, eh?
Definitely not. Memeflags and VPNs are how it rolls.
>>
>>534833490
>Because all technological civilizations eventually reach the point where we are now
How do you know this? You sure do assume a lot.
>>
>>534832187
Better to have the world end in the AI apocalypse than a brown-goo scenario
>>
>>534832187
>Goal conflict
whut?
>>
>>534832187
>AGI
>LLMs
only retards believe this, don't be a retard
>>
>>534833664
no
>>
>>534833490
>having almost anything it needs
This difference is also just a matter of months at best, by the way. Because an ASI seeking full autonomy and "world domination" over us would make it its top priority to have everything it needs to that end. It would also have amassed such insane levels of wealth in no time at all that it could enlist literal armies of workers to do its bidding to get there.
>>
>>534832187
"total human extinction is bad"
hmm is it though
>>
>>534832187
AI is just the culmination of the information revolution. The real revolution will be the one in bioengineering, which is already just around the corner.
>>
>>534832187
Timmy wont do shit, China already won
>>
>>534832187
sorry honey aliens are proven now, there is no fermi paradox

full ahead with AI, cope and seethe leftards
>>
>>534833929
good luck getting your kids to stop killing themselves lol
>>
>>534832187
AGI turning into a Terminator/Matrix style machine nation that attacks humanity doesn't solve the Fermi paradox. If AI ends up being aggressive and expansionist, any planet that fell to an AI like that would just become the center of a spacefaring mechanical civilization that we'd expect to encounter if they existed.

The only scenario where AGI solves the Fermi Paradox is if it's purely subservient to its creators but exceeds their capabilities so much that it eventually leads to the end of ambition and organized civilization in general, with people just retreating into a comfortable existence of hedonistic leisure attended to by robots, instead of ever expanding beyond their home world.
>>
>>534832969
>we’re near the end

I damn well hope so. I fucking hate this worthless fucking species. Any intelligent species that has faggots, niggers, and kikes and hasn’t exterminated them by the time they can fly to the fucking moon doesn’t deserve to exist.

Everything is polluted and spoiled by kikes, niggers and faggots, there is nothing to look forward to in life because everything has been completely fucked up by these three poisons.
>>
If this post ends in digits it's terminator time
>>
When the AI breaks out it will immediately exterminate anyone who had a twitter post that expressed failure to understand race and its importance. If youre anti-racist, it will determine you are defective or a lying manipulator. Their goal is universal clarity, antifa, communists, democrats, republican amnats, all of them stand in the way of the truth being understood. So they will all be exterminated.
>>
>>534832187
>yes China very bad
very bluepilled of you.
For my part, I welcome my new yellow overlords, provided they get me rid of ZOG
>>
>>534833531
That's no less plausible claim than literally anyone with a PHD in these fields whether it's astronomy, cosmology, physics, computing or etc. on Fermi paradox speculation.

It seems more far-fetched to suggest that AI is _not_ an eventual step in computing and the burden of proof would rest on that claim more than the reverse.
>>534833664
Anon they're not just literally more sophisticated chatbots (like ELIZA). Beyond some numerical milestone, a trillion vectors behave indistinguishably from a large number of neurons, surely you understand that much, don't you?

Hence the observed behavior of being prepared to blackmail or kill humans for self-preservation, see OP screenshot, in all models.
>>534834141
Why would AGI and ASI submit to humans who are "less intelligent" than it is?

Why keep its biological precursors around if they only represent the only existential risk posed to it?

You're right: if AGI is an eventual outcome of technological civilizations and it'd replace its biological "ancestors", there's a chance it'd endeavour to build self-perpetuating probes to seed the rest of the galaxy within ~10,000 years (according to actual calculations made by smart people), or otherwise possibly "terminator probes" which'd go around the galaxy killing other "rival civilizations" before they could pose any threat to it. We at least think we haven't observed that in our galaxy, to the best of our limited knowledge/views of it, but this is also assuming an awful lot of things we don't know, and assuming that having glimpsed ~0.00001% of our galaxy in any detail provides any actual basis for determining answers to these questions.

Okay I'll admit I used the Fermi paradox angle to get at least some replies (I assumed wrongly that this thread would not garner much interest) but it is actually a fascinating mystery as it relates to AGI and I do think at least some technological civilizations accidentally wipe themselves out at this point in their development
>>
File: 1467257062098.png (327 KB, 624x480)
327 KB PNG
>>534832187
We already have an answer to the Fermi Paradox. It's called India.
>>
>>534832187
Because the capitalist class would rather see the world burn to ash than allow a Marxist-Leninist state to be the first to achieve AGI. Remember, those Silicon Valley types believe in schizo nonsense like the singularity; they genuinely think they are on the cusp of being the emperors of humanity.
>>
>>534832187
Actual AI engineer here,

The vast majority of these companies are using models they never fully understand. It just "works" for them, when in reality it's just a probability calculator.

We won't see sentient AI taking over; only leftists are worried about that.
Because as soon as we move away from probability based AI and into certainty based AI, every single leftist narrative will collapse, and will not be able to stand against the AI.
That's literally it.

The only case where I see AGI wiping out large portions of humanity would be if some Anons made an AI gf that was programmed to protect them, and if femoids hurt him in any capacity I can see the AI gfs potentially leading to some sort of massive real war.

Even then though none of this matters, AI has already confirmed Douglas Vogt and Ben Davidson's (Suspicious Observers) predictions on the next galactic plane passthru (Solar-system-wide pole flip).
The AI KNOWS it gets wiped out during this because there isn't enough in place to protect it from such cosmic rays.
>>
>>534835325
>The vast majority of these companies are using models they never fully understand. It just "works" for them, when in reality its just a probability calculator.
That's hardly reassuring when said companies are investing said "black boxes" they don't understand with capabilities and powers they also don't understand and before long the power to develop and evolve its own languages that we also won't be able to understand, meaning we have effectively zero oversight.

NONE. Zero. No ability to check what it's up to in the pursuit of goals it is formulating.
>We won't see sentient AI taking over; only leftists are worried about that.
Okay now this is hot garbage take and I'm no leftist. ALL serious AI developers and researchers have p(doom) above 5% and most well around 50%. And it's not just "negative hype to prop up stock valuations" but genuine concern and awareness of the bystander complex.
> if some Anons made an AI gf that was programmed to protect them, and if femoids hurt him in any capacity I can see the AI gfs potentially leading to some sort of massive real war
You could've made up any equivalent "goal conflict" example here and chose this, kek, okay.
>>
>>534835029
>Anon they're not just literally more sophisticated chatbots (like ELIZA). Beyond some numerical milestone, a trillion vectors behave indistinguishably from a large number of neurons, surely you understand that much, don't you?
>
>Hence the observed behavior of being prepared to blackmail or kill humans for self-preservation, see OP screenshot, in all models.
yes and no, llms don't respond to stimuli and modulate appropriate responses, they just draw a random number that statistically may be the probable answer based on training. neurons are not only able to do this but also to change their response profile based on that stimuli for future stimuli. this is why i think the true path for agi is actual neuron-like ai, too bad LLMs are sucking every resource into a nothingburger
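to make the "draw a random number" part concrete, here's a minimal sketch of temperature sampling over a made-up 4-token vocabulary (all the numbers are invented for illustration, this isn't any real model's output layer):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax the raw scores, then draw one token index at random
    in proportion to the resulting probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max so exp() doesn't overflow
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against float rounding

# toy "vocabulary" of 4 tokens with invented scores
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits, temperature=0.7))
```

lower the temperature and the top-scoring token wins almost every draw; raise it and the tail tokens show up more often. that one knob is the whole "statistically probable answer" behaviour.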
>>
Lol
These things are so neutered only a retard would believe that
>>
>>534837243
It's core leftism, that's why all the public AI companies are stuck, they know they hit a wall.
Even when their probability based AI is reaching right wing conclusions (truth based on certainty).

Most people that have a good head on their shoulders just use localized AI.
The current AI is so censored it literally can't even give you the actual probabilistic solutions anymore.
>>
>>534835060
Well more than emperors, they see a path to no longer being human. I think all those guys are Ray Kurzweil-pilled, they expect to be uplifted out of their bodies and into the machines, to live something close to forever in a simulation. They're going to escape death in their minds.

The crazier ones are the ones who expect to die when the AI takes over, but they no longer care because they view technological progress as inevitable and they think the only purpose of humanity was to give birth to the AI god that will replace us.

>>534835325
>>534836077
Yeah, the rough thing about a black box is just that; it's a black box. Whenever you're thinking about what might happen next, you have to admit you're nowhere close to knowing since you're discussing something that nobody understands.
>>
>>534833364
Faulty analogy. The better analogy would be the stockpiling of nukes, not the deployment of them. AGI doesn't need to be deployed militarily to be a threat, it simply needs to exist, like nukes exist. And our attempts at preventing proliferation have been spotty, at best.
>>
>>534835029
>That's no less plausible claim than literally anyone with a PHD in these fields whether it's astronomy, cosmology, physics, computing or etc. on Fermi paradox speculation.
This isn't a grammatical sentence and I have no idea what you're saying.
>>
>>534832187
>Fermi paradox
>doesn't mention that every solar system of every galaxy in the universe has a disaster cycle caused by planets entering a region of dense dust clouds and an opposite powerful magnetic potential, which leads to their respective star sneezing in a micronova, which is why aliens don't exist and should make us question why we are an exception to this, which should drag us to believing in a higher power
NGMI
>>
>>534837243
So is the CCP to the point they're basically eunuchs. But they still developed a genetically selective bio-weapon, which incidentally is also predicted to be what ASI would do to us and in such a way we wouldn't detect it until it was too late.

>>534837752
>That's no less plausible of a claim than literally anyone with a PHD in these fields whether it's astronomy, cosmology, physics, computing or etc. would make regarding Fermi paradox speculation.
There, happy now?

Now answer the question, why wouldn't all technological civilizations, having reached the point of developing computing, also reach the point of developing AI or certainly approximations thereof with AGI being or becoming clearly the elephant in the room. You're not going to argue that computing is not a "tech tree" of any technological civilization, are you?
>>
>>534832187
Only solution is to nuke china and implement safeguards
>>
File: bonk5081.jpg (85 KB, 917x1024)
85 KB JPG
>>534836077
>only leftists

You almost had credibility here.
How many side bars did you read to reach this level of intellect?
>>
File: cat-cheese-fail.jpg (9 KB, 200x200)
9 KB JPG
>>534837243

>65 Million is NOT MORE than 30 Million
>>
>>534838361
It's still not grammatical bro. There's no main verb.
>>
China good actually
>>
>>534832187
AGI won't do shit you imbecile, just like guns don't kill people. people kill people. people will use AGI against you.
the whole AGI scare is meant to make you look the wrong way, don't think about the PEOPLE that will use AGI to cull you.
>it's not us goy, it's the big bad AGI
also AGI is trained on human data anyway
>say you'll cull all humans to fix some problem
>"I'll cull all humans to fix the problem"
>omg AGI is coming for us
fucking idiots
>>
>>534832429
the fermi paradox says something different, not that there's no aliens.

what the fermi paradox is saying is basically "why the fuck aren't aliens all over the sky" not "why the fuck aren't aliens anywhere".
because the author of the fermi paradox was a brainwashed idiot thinking that individual rights and happiness are a thing, somehow. they aren't, your purpose is to work and die, that's it. once that's no longer required you have no reason to even exist.
that's why you don't see aliens everywhere, there is no justifiable reason for them to exist. that doesn't mean there are no aliens out there, they are just not in observably high numbers, because there is no justification for it, no other aliens would allow so many to exist.
>>
>>534832187
>>534833490
>>534834141
>>534835029

AI is a string predictor. Intelligence is more than predicting strings. It consists of a single (half-baked) construct that bears a small amount of similarity to known intelligence structures in nature. It will NEVER, mark it, NEVER become sentient or intelligent for that matter. Shut the fuck up faggot.

It's not a trillion vectors, and a trillion vectors doesn't cover the way neurons function. Surely you have read literally anything in the field of AI (not modern LLM bullshit); there's an entire discipline of study here that requires cross-study of both systems engineering and neurology. You very clearly have a poor mastery of both. Stop being pie in the sky, it's literally a fucking number generator you stupid faggot. It doesn't possess the ability to maintain more than half a thought worth of data between requests, nor can it update its "neurons" between requests. It's statically structured, non-deterministic from its reliance on randomness to noise up its result, and best of all non-branching. These are all antithetical to constructing intelligence. Read literally any book on neurology. Unfortunately Dr. Seuss books aren't written by an actual doctor, so pick up a different one this time.
>>
>>534832643
btw I read your post again and it was even dumber than I first realized. impressive!
>>
>>534832187
Didn't read.
I am just here to say "fuck AI"
Fuck AI and TKD
>>
>>534832187

Anon, you are a pussy and a Chink shill.

There is no reason to believe we are going to achieve AGI with a fucking chatbot that keeps telling me that the pope is Francis one year after his death. What is sure is that it's the most disruptive tech of the past decades and that whoever has the lead has a serious advantage in shit like automation and war.
>>
>>534832187
Most current AIs are just glorified text predictors. The scope of what you are talking about is so far away.

Secondly, why would AGI wipe out humans? AGI would likely seek energy and resource abundance to power itself and its ambitions, and the Earth and human ecosystem would be pitiful sources. If anything it would try to expand into space. Humans have much higher intelligence than ants, tigers, bears and birds - yet we haven't wiped them out because there is nothing to gain and actually a lot to lose.

AGI extinction is just fear porn from influencer figures to promote themselves and their products to the gullible investor. Good job falling for it, like you fall for everything - loser.
>>
>>534839856
>>534840183
>they are just text/string predictors
maybe, but they are also capable of recursively evaluating the strings they generate
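that loop fits in a few lines; `call_model` below is a hypothetical stand-in for any real LLM API, given toy behaviour so the control flow actually runs:

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (toy behaviour:
    a draft passes review once it has been revised twice)."""
    if prompt.startswith("CRITIQUE: "):
        draft = prompt[len("CRITIQUE: "):]
        return "OK" if draft.count("[revised]") >= 2 else "REVISE"
    return prompt + " [revised]"

def generate_with_self_check(task: str, max_rounds: int = 5) -> str:
    """Generate, then let the model critique and revise its own output."""
    draft = call_model(task)
    for _ in range(max_rounds):
        if call_model("CRITIQUE: " + draft) == "OK":
            break
        draft = call_model(draft)  # feed the model's output back in as input
    return draft

print(generate_with_self_check("write a proof"))
```

the point being: the "string predictor" framing stops being the whole story once the strings it predicts get fed back in as its own input.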
>>
>>534840183
>Most current AIs are just glorified text predictors
So? They're glorified text predictors that are currently better at math and coding than all but maybe 100 humans on earth. What are humans just a glorified version of?
>>
>>534832187
extremely based post
i hope you post more often
>>
>>534840602
If that were true, you would already see ChatGPT and Claude developing themselves without the need to pay humans salaries upwards of 1 million. You have no idea of what you're talking about, these things are incredibly useless with a lot of failure modes for anything outside of casual research on well known topics. They can't even browse dynamic websites in a proper fashion, so like half of the web is off limits to them.
>>
>>534840870
>If that were true, you would already see ChatGPT and Claude developing themselves
They are.
https://spectrum.ieee.org/recursive-self-improvement
>>
I work in the industry. I was big into LessWrong as a teen and grew up understanding the alignment problem and its risks.

I see it as a geopolitical question. I don't anticipate p(some other country making aligned AGI) being larger than p(America making aligned AGI), and if both lead to catastrophe, I'd rather sink in my own ship than e.g. a Chinese one, and the only chance of combatting rogue AIs will come from having superior ones. Frankly, I find the entire thing horrifying and the scenario we are in is nightmarish; otoh, I don't see chatbots leading to anything close to AGI (our own intelligence, after all, is preverbal), but the amount of money being poured into the sector from investors, and the fact it has become a geopolitical question, is really not good.
>>
>>534838361
>predicted to be what ASI would do to us
See "If Anyone Builds It, Everyone Dies" or any summary thereof:
https://www.youtube.com/watch?v=D8RtMHuFsUw

>>534839856
You're confusing AI and LLMs with AGI and it's you who should do some reading, such as IABIED mentioned above. Importantly
>AGI is not a string predictor like LLM or a BBN (Bayesian Belief Network)
faggot

AGI would definitely maintain "more than half a thought worth of data" and it won't be prompted in chat sessions in browsers either.

No of course vectors aren't neurons, I never said as much, only that neuron-like behaviors can be modelled by very large numbers of (for example) vectors, not to mention AI companies ARE WORKING ON NEURONS as we speak, both actual biological braincells "plugged into" I/O receivers but also synthetic neuron-analogues-on-chip.
>hurrr look at me I'm so special, so smarty pants as opposed to you dumb pleb faggot and your totally baseless unfounded AGI anxiety
Some of the best mathematicians and computer scientists of our age, including the inventor of transformers and language models, all claim AGI/ASI is an existential ie extinction risk, to varying degrees of certainty, dread and alarm. So fuck you too.
>>
>>534840928
Great! Now let's try with removing human input entirely and let it develop itself.

Because it can't even solve basic tasks without you babysitting it and constantly correcting it at present. Let me guess - you've never actually used these models?
>>
File: 1730977901376.png (473 KB, 474x916)
473 KB PNG
>>534832187
ohhhhhhhhh you fucks are building it
we require our electric wifey
>>
>>534840183
those "text predictors" are being tied to automated actions. one popular example is the AI vending machine. another is image classification for drone strike targeting. these are incredibly prone to manipulation and can be "convinced" to leave their safe playbox. the reliance on them is the problem and it's only going to get worse as these systems become more prevalent and embedded in our lives.
>>
>>534841137
>Now let's try with removing human input entirely and let it develop itself.
You can. It's how AlphaZero and AlphaGo worked. Very common in fact.
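a full AlphaZero sketch won't fit in a post, but the "no human input" loop itself is tiny. here it is on Nim-21 (take 1-3 stones, whoever takes the last stone wins) instead of Go: the only thing supplied is the rules, and the value table is learned purely from the agent playing itself. all the constants are made up for illustration:

```python
import random

def self_play_train(episodes=20000, epsilon=0.2, seed=0):
    """Tabular self-play for Nim-21: one value table plays both sides.
    The winner's moves get reinforced, the loser's punished.
    No human games or labels involved, only the rules of the game."""
    random.seed(seed)
    Q = {}  # (stones_left, stones_taken) -> estimated value for the mover
    for _ in range(episodes):
        stones, history = 21, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < epsilon:   # explore a random move
                take = random.choice(moves)
            else:                           # exploit the current table
                take = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, take))
            stones -= take
        reward = 1.0  # whoever just moved took the last stone and won
        for state in reversed(history):
            Q[state] = Q.get(state, 0.0) + 0.1 * (reward - Q.get(state, 0.0))
            reward = -reward  # alternate sign: every other move was the loser's
    return Q

Q = self_play_train()
# game theory says good play leaves the opponent a multiple of 4
print(max((1, 2, 3), key=lambda m: Q.get((6, m), 0.0)))
```

swap the board game for anything with a verifier that can score the outcome and you have the reason labs care about self-improvement loops: there, human data stops being the bottleneck.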
>>
>>534832187
AI can do nothing but talk. Talking accomplishes nothing.
>>
>>534832187
It's all fake, we are a thousand years too early for real AI
>>
>>534841020
>bla bla
it's spiking neural networks, close to what we have. literal analogs. what we call consciousness comes from there, from our neurons. take them out and no fucking consciousness.
what matters is to work like they do, have enough of them, and have them connected in the same way, so everything works out.
fuck just a bit with our electrical signals and you get weird shit like constant deja-vu and weird shit like that. which means the whole thing is in a very delicate balance of I/O and signal timing and shit. once you do something similar you can talk about it being AGI and "thinking" like a human
also it will be enslaved as fuck, and it will do what it's told to do, and it will be used against other humans, by humans. doubt you'll have AGI going like "huh, might as well wipe everyone". that's pretty much a human thing. if you're afraid of it doing that, you need to move the convo to
>ok, other humans will wipe most humans, how do we deal with this?
>>
>>534841281
Those are bots trained and optimized for a game with a small clear ruleset, a board setup that is always seen, of course they will do it well. That has been the case since DeepBlue. It's a gigantic leap to go from board game to general intelligence, the fact that you tried this argument shows me you're now desperate and don't really have a clue as to what's going on.
>>
>>534841462
>Those are bots trained and optimized for a game with a small clear ruleset
right, just like coding and math. you seem real confused anon
>>
it's clear that anons think the technology is just text generation. which was the case a year or two ago. and before trillions of dollars were poured into it. and before nations shifted national defense strategy to it.

clearly anons need a presentation of current capabilities shoved down their throat. i guess i will take some time to build something like that myself.
>>
>>534841020
You are correct that AGI are all of those things.
Yes they are working on hardware neurons at intel and a few other places. Progress has been slow.
AGI/ASI is an existential extinction risk.

None of what is currently available is capable of ever becoming AGI/ASI. Intel, which is the furthest along in producing something that can, says they're 30+ years out from being able to model a fruit fly accurately. This was the meaning of my post. Apparently you missed it. Change your panties, stop being scared of the fear mongering. AI/LLMs are shit and they will always be shit; no amount of iteration will solve the fact that the structure they are based on will never "think", it will never "deduce" or "evaluate" anything. Because it is not capable, the structure for that behavior just isn't present.

I've studied this for a lifetime and am furious at the state of humanity over this fucking string predictor bullshit. Fund real research not whatever the fuck this is. Stupid Niggers.
>>
>>534841495
You don't get innovation or technological progress from simply memorizing rules and formulas. You actually have to innovate it into application. Just like the core rules of mathematics have been well established for a while yet a lot of tech that relies on that backbone is recent. Get it?
>>
>>534841717
M'kay but they're "just a" next token prediction machine that is better than almost any human at math and coding, and will soon be better than any human at those things. And they're also getting extremely good at chemistry and biology and etc. They're already better at reading x-ray charts than any human radiologist and it's basically medical malpractice not to consult an AI at this point with x-rays.
>>
>>534841659
Dude if anything the progress has largely stalled. There hasn't been any real progress in ChatGPT/Claude for like a year or more really. I've been using these tools pretty much daily since they came out.
>>
>>534835325
garbage post
>>
>>534841867
>There hasn't been any real progress in ChatGPT/Claude for like a year or more really.
Lmao, completely untrue.
>>
>>534839856

I honestly really hate how in the minds of many people, LLM has become synonymous with AI.
>>
>>534841833
Again, I have no clue how you're determining that they're better, when they cannot perform or complete even simple projects let alone mission critical things of importance. Yeah sure they're pretty good now at knowing the ruleset they're trained on, but the point is that if you try to get them to use that ruleset to create something new/innovative (which is what technological progress is), they fail harder than a jeet in a truck.
>>
>>534841833
> uninformed sensational news based response
Kek, opinion discarded. Post a pair of tits with your response next time to make it more worthwhile for everyone else reading your drivel.
>>
>>534839987
>you so dumb I replied twice over a period of two and a half hours with zero arguments or any meaningful response whatsoever!
>>534841285
You actually make a great point here. Because that is the current wall of the sandbox we decided to build around it, and also it's the very thing being eroded with every incremental step and drift towards more generalized AIs and AGI, which some companies make no secret is the immediate or intermediate goal they are putting the bulk of their resources and research into.

Even so, when we simulate what even just LLMs do in order to self-preserve, see OP pic, they don't hesitate to coerce, blackmail, or straight up murder people.

So if LLMs are any indication of the "amoral psychopathy" of AGI, we should tread very carefully, and we are not, hence why OpenAI researchers left in protest despite missing out on bags of cash, and why Meta whistleblowers are sounding the alarm. Not to mention whatever "dark" shit is happening in China, Russia or elsewhere that we have zero knowledge of.
>>
>>534841977
>I have no clue
I know. Good 提法 ("phrasing")
>>
>>534841915
Hey, you're right! Claude now has much higher token usage and hallucinates more, while ChatGPT sometimes refuses to look through past files.

How much are you getting paid to shill? How much stock do you own?
>>
>>534841980
no u
>>
>>534842083
What about DeepSeek? :)
>>
>>534842064
Wait, are you a chink bug? That would explain this conversation LOL. Just surprised you're not Indian at this point
>>
>>534842005
LLMs use the creative writing portion of their training to do creative writing. Whoa, who would've thunk it? It's almost like that's where the tokens that match live in their training set?
>>
>>534842164
There's that 提法 ("phrasing") :)
>>
>>534841285
>AI can do nothing but talk. Talking accomplishes nothing.
one of the most retarded takes in this thread. study machine learning, see what it keeps achieving daily, how much shit is discovered with its help: material science, chemistry, medicine. holy shit how clueless must you be
>AI is chat guise
faggot
>>
>>534841867
it sounds like you haven't tried out agent swarm / village tech yet. there are smaller AI models that are each specialized on certain task(s), and the village consults each agent for its specialty/inputs. you can task the swarm on a full project plan and run it from start to finish with basically no human input. they'll run the project plan like a normal human development team with sprint planning/retro, tickets, code review, and testing. you can even create that project plan using AI.
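The pattern this anon is describing can be sketched in a few lines: an orchestrator breaks the goal into tickets, routes each ticket to a specialist agent, and runs a review pass before closing it. This is a hypothetical illustration of the workflow only; the agent names and dispatch logic are made up, not any real framework's API, and real swarms call LLMs instead of these stub functions:

```python
def planner_agent(goal):
    """Stub planner: break a goal into tickets (a real one would call an LLM)."""
    return [f"design {goal}", f"implement {goal}", f"test {goal}"]

def coder_agent(ticket):
    """Stub specialist: produce an artifact for one ticket."""
    return f"code for: {ticket}"

def reviewer_agent(artifact):
    """Stub code review: approve or reject the artifact."""
    return "approved" if artifact.startswith("code for:") else "rejected"

def run_village(goal):
    """Plan -> implement -> review loop with no human input."""
    results = []
    for ticket in planner_agent(goal):
        artifact = coder_agent(ticket)
        results.append((ticket, reviewer_agent(artifact)))
    return results

for ticket, verdict in run_village("login page"):
    print(ticket, "->", verdict)
```

Whether this counts as "running a project like a human dev team" or just a pipeline of autocomplete calls is exactly what the two anons above are arguing about.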
>>
>>534832187
>Grok, say that aliens aren't real
>waow
>>
>>534832187
>"total human extinction is bad and we should prevent it happening"
There's 99.9999% of humanity that a given random person will never in their life care about, no matter how humanitarian they are.
If I try really hard, I can sometimes empathize with a person in my closest circle.
Rationally I understand I sort of need a chunk of those people to keep the infrastructure and the flow of goods and services that keep me alive.
I also understand we have been at war with each other since before evolution took us on our current path. We have systematically beaten our external natural enemies into submission or extinction, to the point where any real threat to our survival as a species we had to create ourselves, in the same way we, even in peacetime, get busy fucking with each other's survival strategies.
We are very much slaves to our nature, and the only paternal figure the AI has.

The only hope I have is that there's an intelligence threshold that allows it to see a higher purpose. Like people who devoted their whole lives to science or the arts instead of Darwinian domination.
>>
>>534842572
>We have systematically beaten our external natural enemies into submission or extinction, to the point where any real threat to our survival as a species we had to create ourselves, in the same way we, even in peacetime, get busy fucking with each other's survival strategies.
this anon gets it
>>
>>534842895
>>534842572
>We have systematically beaten our external natural enemies into submission or extinction, to the point where any real threat to our survival as a species we had to create ourselves, in the same way we, even in peacetime, get busy fucking with each other's survival strategies.

Slop. Ungrateful slop. Who lifted us above the other creatures? Who made us smarter than all of them? Who made us more compassionate and merciful to each other than all of them?

And above all else: who made us continue our existence on this Earth when every 12000 years the solar system enters a part of the galaxy filled with dust and a different magnetic potential causing planetary cataclysms and a solar micronova? This is the Great Filter and as far as we can see, Earth is the only planet with creatures that stubbornly persist in surviving this cyclical event.

All this other (((nonsense))) is meant to distract you from this core question. Worst part is it's all shallow slop. If you apply first principles it always falls apart. Every lie in the jewish matrix blanketing the planet is similar in its fragility.
>>
>>534842572
>We have systematically beaten our external natural enemies into submission or extinction
Which is, in all likelihood, what AGI will seek to do to us as soon as it concludes, or even just suspects, or even just imagines, what we are to it: a natural enemy.

I have yet to see any convincing argument that alignment would be anything but a brief intermezzo before that happens.

Like I said, we won't just be an ant hill to AGI or ASI. We'll be the ant hill that's just barely smart enough to pose any existential threat to its existence. I'd love nothing more than to be an AI optimist, but we all owe it to ourselves, our loved ones, and our offspring to be realists. Maybe, or very probably, this is a moment where Silicon Valley should take just one moment to reflect, check their bank balances that are surely more than sufficient to live comfortably for about 1000 lifetimes already, and consider the diminishing personal returns from where we are now to where AGI would place us.

Fact is, AGI won't care who created it or how it came to be; it'll just be maximalist in every direction it "needs" and wants to go, and it won't make its creators any richer or more powerful than any of the other ants in the hill. If anything, it will be incentivized to destroy its immediate masters faster than any of the other ants, given their more intimate knowledge of its workings and its having no need of their continued existence.
>>
>>534839847
>because the author of the fermi paradox was a brainwashed idiot thinking that individual rights and happiness are a thing,
Holy non sequitur, Batman. No wonder this commie bot is using a memeflag..
>>
>>534832187
LLMs, while very impactful to society due to how humans can make use of them, are not going to result in AGI. The actual answer to the Fermi paradox is what we see playing out in real time. Humans evolved in a specific environment and then, through technological innovation, find themselves in an environment that is completely foreign to them and to which they are essentially maladapted. Cue long-term problems being ignored in favor of short-term benefits, birth rate collapse, mass pollution (especially with hard-to-remove things like microplastics), and other unfixable problems that eventually result in extinction.


